This morning I had the pleasure of discussing the future with Gerd Leonhard. Gerd was listed by Wired Magazine as one of the top 100 most influential people in Europe. His recent book, "Technology vs. Humanity", begins the dialogue on ethics in an exponential world.
Our theme was reimagining the future and the topics ranged from artificial intelligence to exponential progression. You can listen to the entire podcast below.
Edited for Clarity
GERD LEONHARD: Hello this is Gerd Leonhard, Futurist in Zurich, Switzerland. Today, I’m having a conversation with Frank Diana who works for Tata Consultancy Services, TCS, and he will talk about that in a second. Frank and I have been exchanging e-mails and messages with each other for years now and we’ve become fans of each other’s work. There’s lots of synchronicity between what we do, so we figured we’d have a short podcast conversation today, primarily about the topic of Reimagining the Future, because Frank is also a bit of a futurist. So over to you Frank. Just tell us briefly what you do and how you do it.
FRANK DIANA: Well, good morning Gerd. As you said, Frank Diana. I’m with Tata Consultancy Services and I am somewhat of a futurist and advisor. I spend most of my time focused on the next three to five years and beyond–what it might mean for leaders everywhere, not just in business but in government, the impact on society, and what that might mean for a business. My title is actually Principal, Future of Business, here at TCS.
GERD LEONHARD: Well that’s very interesting, because the Future of Business is a topic we’ve been talking about for years now. So let’s talk about it briefly, because I do much the same thing in similar ways. I mostly give speeches and run sessions. People who listen to this podcast will know what I do, so I’ll talk more about that later. I’m an expert at talking about myself. But let’s go right into the topic of the Future of Business. We actually talk about similar topics quite a bit, and one of them is this idea that the future is exponentially different. This sounds like a Silicon Valley meme, but it’s used quite differently in our world. So what do you make of that, and what’s your key theme there?
FRANK DIANA: Well, as you and I have spoken about before, I think there are two key words there–exponential and combinatorial–in that the future is much faster paced than anything we’ve experienced in the past. One of the underlying or foundational reasons, I believe, is that there are so many building blocks already in place. They can be combined in ways that create value fairly rapidly. And so we’re starting to see innovation explode, as a result of combinations that smart people are leveraging to create value in so many different ways. I do believe that we’re struggling with linear structures and linear thinking in a world that is much more exponential.
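The combinatorial point is worth making concrete. As a toy sketch (my numbers and illustration, not Frank's): with n building blocks available, the number of possible pairings alone grows as n(n-1)/2, far faster than the count of blocks itself.

```python
from math import comb

# Toy sketch of combinatorial innovation (illustrative, not from the
# conversation): with n building blocks (cloud, sensors, ML models, ...),
# the number of possible pairwise combinations is n*(n-1)/2.
def pairwise_combinations(n: int) -> int:
    return comb(n, 2)

for n in (10, 100, 1000):
    print(f"{n} blocks -> {pairwise_combinations(n)} possible pairs")
```

Ten times the building blocks yields roughly a hundred times the possible pairings, which is one way to read Frank's claim that innovation is exploding faster than linear thinking expects.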
GERD LEONHARD: Well, you know, I really love the combinatorial aspect of how you’re describing this. And by the way, to the listeners: if you go to frankdiana.net, you can see Frank’s graphics on this, and you definitely have to download those. I think it’s really important to realize that the future is not just exponentially different. We’re now seeing one science breakthrough after another, but they’re also combining–which is the combinatorial effect–to make the world entirely different. They’re amplifying each other. Sometimes I call this interdependence as well. So, for example, smart cities mean smart ports, smart transportation, logistics, a change of vehicles, a change of lifestyle, which changes work. They are all amplifying each other. I use a lot of this in my new book, “Technology vs. Humanity”. I use it as the main argument to say that we can no longer expect the future to be observable in a linear fashion, because now every eighteen months or so we have major breakthroughs that are literally changing the world. One of them, at CES this year, was the recognition that in the future we’re going to have intelligent digital assistants, rather than websites and apps, doing the work for us and replicating how we think in the cloud, so to speak. I call this the Global Brain, and you’ve talked about that several times Frank–about how interfaces are changing and how technology is actually becoming intelligent now.
FRANK DIANA: Yes. Well, first a quick plug for your book: “Technology vs. Humanity” was a phenomenal book, and hopefully we do speak a little bit about ethics as we go here. Yes, I have really focused on customer experience. It has been a big topic for several years now, and I’ve always focused on the next generation of experience. Interaction–how we interact with one another–has changed and will continue to change. As you consider conversational systems, or intelligent systems as you describe them, or the augmented and virtual reality world we are heading towards, how we interact with one another in society is going to change, and has changed considerably. So what does that mean for us going forward? Are those mostly positive effects, or are there some unintended consequences from just that?
GERD LEONHARD: Yes, interesting point. My view is that technology is inevitable and that these changes are coming, irrespective of whether we appreciate some of them or not. The important thing is going to be some sort of control or balance of technology–the good and the bad side, if a technology has bad sides. For example, television had a bad effect on communication and on making your own music at home. But still, we have television now, and so these technologies, like artificial intelligence, are bound to happen and are already happening. The question really is: how do we turn them into the most flourishing thing for humanity, rather than the scary thing depicted in Hollywood motion pictures? So what is your view on that control factor? How will we make sure the technology is used for good or not?
FRANK DIANA: Well it’s interesting, and as you mention in your book, ethics–how we deal with all this–is obviously very critical going forward. And it’s the exponential pace we started this discussion on that I think drives this, along with the large impact this is likely to have. As you said, we’ve had technology changes for years now, but we’ve never really reached the point where we feared for humanity at some level if some form of governance was not introduced. And so I believe, much as you’ve said, that some governance body or bodies have to emerge over time to help govern where this goes, because left to its own devices it will go its own way. We have to exploit and leverage the positive effects of technology and mitigate the risk of the unintended consequences. I know you speak a lot about this. I’ve seen some fascinating work where a nanobot or chip could solve the Parkinson’s problem, but at the same time could have major unintended consequences. So, how do you realize the good and mitigate the risk of the bad?
GERD LEONHARD: Well, I use a theme in my book called “Magic, Manic, Toxic”–and that’s just to describe the technology. First it’s just magic, that we can all of a sudden do “this”, like send WhatsApp messages to a billion people around the world for free, and that we have legal downloads of music and film streaming. That’s kind of magic, right? And then sometimes it gets manic, in the sense of obsessed, like Facebook and social media, so that all we do is repost photos of what we’re eating–that’s kind of manic and still a bit funny. But then the last one is toxic, which means polluting our relationships with each other. For example, using dating services so that we literally forget how we actually date. We just use a shortcut to get to the result of dating, whatever that might be. It poisons the way we think of the world and it creates an illusion, essentially a simulation. I think this is the difficult part. We have to be able to say, “okay, it’s magic,” and it may be a little bit manic because we’re so excited about it. But it becomes toxic when we poison who we are and substitute human-only processes, like relationships or decision making–if we say, for example, that our decisions should be made by artificial intelligence because they’ll be perfect, as in medical areas or politics. I think that’s when we’re getting into the gray zone. I think you talk about this a lot in your blog posts as well. At a certain point, exponential goes beyond our understanding. So there’s a discrepancy: we are not exponential, technology is. That means we’re going to have to put some sort of supervisory mechanism on top of this–something that protects us and creates what I sometimes call the “protection agency for humanity”–because we’re inefficient and technology is very efficient. That’s a conflict. Technology is trying, and will keep trying, to make us more efficient, in which case I think we may lose something.
So, that’s a real worry I’ve been having lately.
FRANK DIANA: I know you’ve spoken about “it’s not all about efficiency” in the past, and I couldn’t agree more. I want to focus on a point you made around algorithms. At some level we’re trusting the algorithms, and obviously as algorithms do more and more of our daily tasks, we’re going to trust them even more. There is a great story about, as you mentioned, dating websites. Network effects are critical for platforms, and dating websites obviously leverage them: they have women and men join the sites, sometimes more men than women. So algorithms were tweaked to ensure that men weren’t finding women who were “out of their league”, if you will–which is actually a funny story. But it speaks to how, if you’re trusting an algorithm to make a good match, it’s at some level making a decision for you. And do we really want that to be the case?
GERD LEONHARD: I think one of the key challenges is that we are obviously not running our lives based on algorithms. In fact, we are way beyond algorithms. If I meet you somewhere, at a trade show for example, it takes the average person less than one second to figure out whether the other person is interesting, part of that tribe, or remotely talkable to, without saying anything. And so we make decisions based on many complex human things. Many of those [decisions] might be data of some sort in the end; we just don’t know what kind of data. We don’t even know how people think or communicate in the complete way that we do. That’s why I think it’s a stretch to say, as some people have in the past, that basically all decisions in the future will be made by algorithms and by machines–machines that have an IQ of fifty thousand or so. I think that’s a very dangerous and slippery slope. In my view, we should use that technology to get data points and to be faster, but always put the human back in the loop. I think that is where our thinking differs from Silicon Valley’s, which is essentially looking to replace the human in the loop with a superior machine.
FRANK DIANA: Yes. Maybe we can use the driverless car as an example. There’s lots of discussion around ethics as it relates to the algorithms that have to make decisions at a point in time. As that [autonomous] car is driving, for example, does it hit the pedestrian? Does it do whatever it needs to do to protect you, the driver, and therefore maybe hit the woman with the baby carriage? How do you see that playing out?
GERD LEONHARD: Well, I think there are some trivial things that we could do without, that are not essentially human. For example, I believe that driving a car isn’t necessarily a human undertaking or a human right or anything that we would really need. I mean, we basically drove cars temporarily, right? So I could do without driving a car, and if the car says it can do better, I’m fine with that if I can trust it. And that’s a big if. It’s not essentially human, but other things are: hiring and firing people, creating a team, innovating something, telling a story, picking your partner. All these things are essentially human, in the sense that we shouldn’t substitute them with a shortcut that promises us the world but in the end only delivers ten percent. I refer to that sometimes as “wormholing”–you know, using a wormhole through time and space. For example, some people say, “okay, I want to be a musician. So I get an iPad, I download a bunch of apps, and ten hours later I’m a musician.” That’s fantastic. If you can play music, then congratulations. But that doesn’t mean you are the same kind of musician as someone who spent ten thousand hours learning the piano. Right? There’s no judgment in that whatsoever. You want to be a musician–I think that’s great. But we should understand that there are some human processes that don’t have wormholes, that cannot be generated, where there’s no app. As I like to say, there’s no app for happiness, and we should not try to invent one. I think that’s where we are reaching too far, and this is what I sometimes really worry about. The singularity, in those kinds of conversations in the Bay Area and Silicon Valley, is about replacing all these things that are inherently human, chaotic, and inefficient with something very streamlined–but it only ends up being a very, very bad copy of what we do.
FRANK DIANA: Hmmm. A very interesting take on that, and I’m sorry to interrupt. I want to play off of that Silicon Valley notion, because I know we follow some of the same folks out of Silicon Valley and there is this growing notion that over time we’ll see a shift from a profit motive to a purpose motive. We do see the Elon Musks of the world, and others, who seem to have really good intentions solving the energy problems of the world–or in Bill Gates’ case, the hunger problems of the world–and clearly technology aids in all of that. Are you seeing that most of that is still profit oriented? Or do you really think there’s a genuine desire to solve some of these big challenges?
GERD LEONHARD: Well, I think there is. There are lots of great people who have that genuine desire. I would say that’s probably true for most people in Silicon Valley, but they may be using it as a bit of a fig leaf. It starts out as a “feel good” solution to the challenge, but inadvertently converts into something that is no longer about the original purpose. I think one of the problems is a system of extreme capitalism, which I think Silicon Valley would qualify for. In extreme capitalism, it’s very difficult to perceive purpose as something you can monetize. It’s very easy to monetize technology, because you can sell it. You cannot monetize humanity. Think about that for a second: how would you turn happiness into money? I think that is beyond capitalism. So at a certain point, when technology has permeated every part of our lives–in roughly five to eight years, the point of singularity–that whole system will switch. What we do will no longer be about achieving that illustrious economic peak, but about seeking a peak of happiness or purpose. That’s where we are going: to this kind of “Star Trek economy”, which I think is very utopian but bears a closer look. If you read books about capital–Piketty’s book “Capital”, for example–I think it pretty clearly outlines that we are at the closing gate of this extreme capitalism, heading toward what I call post-capitalism, and that is really part of the same conversation. Ultimately, technology will force us to face the fact that it’s about more than technology and making money with it, or about growth and profit. This is the underlying challenge for us.
FRANK DIANA: Yes. I think the question is over what timeline? I agree and you talk about it in a post-capitalist sense. I’m referring to it as the “hybrid economy” that emerges over time because there’s still probably some element of capitalism that remains. I know Rifkin speaks of this in zero marginal cost kind of ways, about a collaborative commons emerging over time. But what does that timeline look like? Any thoughts? 2050? Sooner?
GERD LEONHARD: I think we would agree–and lots of people talk about this, including, of course, Kurzweil and others–that it’s roughly ten years to this point of singularity, probably less. Which means that we’ll have the first computer that has the horsepower, the brain power so to speak, of the human brain. We have that now, but they’re huge machines. So by default, a computer will be just as intellectually empowered as we are in less than ten years. One computer brain, one human brain. And then, roughly in 2050, one computer brain will have the power of all human brains–all ten billion human brains. At that point there are fundamental shifts that we can anticipate, and this will lead us to really define who we are. Are we really going to be doing the monkey work of bookkeeping or financial construction? We’re going to be doing things that only humans can do, and maybe there will be fewer and fewer of them in the future! But there are still thousands of things that only we can do really well: efficient, powerful storytelling; dealing with emotions; negotiation; all these things. I think that’s where we’re moving. This may be hard to monetize, and that is one of the challenges. How do you monetize a company that may only have one thousand employees rather than two hundred thousand? Maybe a kind of Wal-Mart with fifty thousand employees rather than two and a half million. How do you monetize what they are? How do you monetize the fact that they’re human or unique or creative? That is going to really disrupt our financial and, of course, our social systems, [including] Social Security, pensions, and all those kinds of things.
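Gerd's timeline can be sanity-checked with a back-of-the-envelope doubling calculation. This is a toy sketch under assumed numbers (a Moore's-law-style 18-month doubling cadence and the conversation's "ten billion human brains"), not a forecast from either speaker:

```python
import math

# Back-of-the-envelope check of the timeline (illustrative assumptions):
# starting from one machine at human-brain parity, how long until one
# machine matches ~10 billion human brains, if capability doubles
# every 18 months?
doubling_period_years = 1.5           # assumed doubling cadence
brains_to_match = 10_000_000_000      # "all human brains" in Gerd's framing

doublings_needed = math.log2(brains_to_match)   # ~33 doublings
years = doublings_needed * doubling_period_years
print(f"~{years:.0f} years of doubling after parity")
```

Under these assumptions it takes roughly fifty years of doublings after parity, which shows how sensitive a "2050" figure is to when parity arrives and how fast the doubling cadence really is.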
FRANK DIANA: Yes, I agree with you. As that happens, do you envision social unrest along the way? Inequality is sure to be amplified as a result of all this. What happens in the next ten years, from a social standpoint, as this plays out?
GERD LEONHARD: Well, I think the key question is inequality–the polarization of capital and of the benefits of technology, the fact that they have all accumulated at one end of the spectrum, which both of us, of course, are lucky enough to be a small part of. But it is responsible for many of the things we’re seeing right now: widespread frustration at not getting a share of the benefits, and terrorism on the rise–which in overall terms is still very small compared to diseases and car crashes, but terrorism is driven primarily by inequality and economics, with religion a little slice on top of that. I think that’s something we have to address. We have to take the benefits we’re deriving from technology and distribute them much more collectively, in a way that creates more of a level playing field. We cannot, for example, invent a genome therapy that is going to cure cancer and then just give it to people who have ten million pounds or dollars each. That would be the biggest driver of terrorism, if there ever was one. So this brings up a lot of fundamental questions about the economic system. I think ultimately we will be forced to concede that we have to distribute the benefits of what we’re inventing much, much more widely in order to create a livable society.
FRANK DIANA: Yes. Having just gone through the election cycle here in the U.S., it’s not as if you’re hearing a lot of discussion about these things, and I’m still concerned. Let’s go back to the start of our conversation: the exponential pace at which this is occurring is outpacing our linear structures. Is there an inflection point where those things catch up?
GERD LEONHARD: Well, I think you’ll see–and you have already seen–that people only change because of two things: one is pain, the other is love. We change when we have a reason. Otherwise, we don’t change. That’s just human nature. And so, we change when we hit the wall and it’s really painful. That’s when we consider change, and this is exactly what’s going to happen now. First, we’re going to see really bad things happen as a result of not understanding our options. Think of Fukushima in Japan, which became the major decision-making point for lots of countries to abandon nuclear power–and of course they have backpedaled on that very heavily in the last couple of years. You will see things like, let’s say, a hundred million DNA records being stolen, and people being cloned inadvertently because their data is in the health cloud. And that’s where we’re going to see really, really strong action and responses. So, I think we’re going to hit the wall in many ways first. Let’s hope it’s not going to be too bad. And then we fall in love with the positive parts of what we could be doing here. I think that will create the change we really need and begin a wider discussion about the power of technology and what it means. That’s just now starting, and I hope my book can contribute to it. And the tech companies themselves are coming forward and saying–you know, this is all fantastic, but why are we doing this? What is the purpose of our technology? For the first time, for example, the big tech companies have joined in an AI organization, the Partnership on AI, which is pretty much a fig leaf. But still, it’s a good start, and I think we’re going to see that discussion rage as everything becomes automated, virtualized, and robotized. You talk about that a lot in your blog as well.
I think this is where we need to find out what is the sense of the purpose behind all of this and how can we create a society that is truly human and not truly machine.
FRANK DIANA: Yes. And I love the way you focus on team human. I think that’s a great point of view and perspective, and all of this leads me to a question. Given the uncertainty about the pace of change and what’s coming, I would think futurists would be very critical at this point in time. Would you see, for example, futurists playing bigger roles on the boards of companies, or as part of executive teams, as we move forward? There was a point where futurists fell out of vogue at some level. Are you seeing any shift there?
GERD LEONHARD: Well, I’d like to say–and recently I’ve expanded on this when I speak–that the future is no longer what’s going to happen tomorrow. The future is what has already happened that we haven’t yet seen. It is very important to realize that the future is already here, as William Gibson keeps saying; basically, we’re right in the middle of it, not looking in the right place. So it becomes essential for companies to have futuristic thinking–not just with futurists, but to understand that we live in a parallel world. We live in a world of now, which is how we make money now, what we do now. And then there’s a world of tomorrow, which is partly here already. We have to have a foot in both worlds. This is essential if you want to survive. You could see it in the responses of the German car companies to Tesla, and of Toyota to Google. The initial response was to do nothing. Now they’re finally getting it and they’re jumping in headfirst. They are saying, “Yes, we can do that. We can live in the old world and we can live in the new world. And when the new world is here, we’ll be ready to take over.” This is what they’re trying to do now. You have to have this hybrid approach. It is the most important thing I keep telling my clients. You can’t throw away what you have now, unless there’s a good reason. You keep on milking it as far as you can, as long as it’s not detrimental. Then you build the new reality right next to it, and it may be completely different. As a pharma company, for example, it’s quite clear you’re not going to be selling pills in twenty years. But how do you get from one to the next? Well, you have to play on both turfs.
FRANK DIANA: Yes. And as you know, I refer to some of this as reimagining the future. That visual you reference lays out all these future scenarios, and I’m a big believer that there has to be some work at the leadership level to understand these scenarios: where their potential paths could lead, what it means for them, and how they are going to respond. And to your point, you do that in parallel. So it’s sort of a future-scenario-analysis exercise with responses that you not only come up with, but then experiment with. That whole notion of experimentation and failing fast is not really the strong suit of traditional companies. They have to get better at those things.
GERD LEONHARD: Well, basically, science fiction is becoming science fact. Even if you are a scientist, you would agree that certain science fiction will remain science fiction for a while–like uploading a brain and those kinds of things, or easily traveling to other planets. But other science fiction–voice control, speaking to a computer, the car driving itself, automatic language translation, and so on–[will become science fact]. And, of course, battery technology, sustainable energy, solar, and all of this. We’re seeing a revolution pretty much every month. And as Peter Diamandis from Singularity University likes to keep saying, this leads us to a point where everything is possible. If you ask whether technology can do something or not, every time the answer is yes, it can. The question is just when. We have to change our horizons and stop looking at the future as an elongation of today, because it will not be just a continuation of what I do today. And this is very true of a futurist as well. Our job is fundamentally changing, in that very soon computers and machines will be able to entirely predict the future. That’s not what I do, but computers like IBM Watson can do that. But can they understand people? I think they can read people’s faces, but that doesn’t mean they’re really in touch with what people actually do. In German there’s a great word, “Dasein”, which means existence. It’s widely used in philosophy. And therein lies the difference between humans and [machines]: humans actually exist. We have a Dasein. We are, as you would say in Yiddish, a mensch. Machines are not. They don’t actually exist as an entity. They could simulate all of that, but they don’t actually exist. I think this is a real difference to realize. We’re going to rely enormously on machines in the future, but they’re not beings. They are simulations of beings.
I think that’s where we should make the difference also.
FRANK DIANA: Yes. The human characteristics, as you keep saying, are the ones that really play out and are very important in the future. Our creativity, imagination, empathy, etc. become much more critical. I think even from a hiring perspective within companies, the kinds of people they hire going forward shift as well.
GERD LEONHARD: I think we have partly similar clients in similar situations. A lot of incumbents are looking at this and saying, “Okay, I’m just going to invest very heavily in technology, especially exponential technology. I’m going to work with start-ups. I’m going to buy all these companies and then the world will be fine.” And that’s just not so. This is just the first wave of what we do. We invest in technology–of course we invest in technology. Clearly, everybody does. That’s the name of the game. But what is beyond technology? I think it’s exponential humanity. It’s a way of saying that we use technology as a really amazing tool to help humans flourish. And if you don’t help humanity flourish, you don’t have a business. You just have a machine, and the machine is worthless. Not right now, but the machine will be worthless. For example, you can see what happened to the music business: one individual song is just worthless as an entity, as a unit. It is the access to the experience and everything around the song, the flair of the artist, and the brand–that is where the value is. It’s not in the copy of the song.
FRANK DIANA: Yes. Great analogy and I know that’s your background in the music business having lived through that. So, we’re experiencing the same things in industry after industry.
GERD LEONHARD: Well, what is your take? When you speak to your clients, what is the primary stumbling block for them?
FRANK DIANA: The primary stumbling block, at least here in the US, continues to be a status quo mentality focused on short-term results. Even if folks intuitively see what’s coming, I think they feel overwhelmed and can’t really address it. So they’re kind of frozen, or [have an] “ignore it and it’ll go away” kind of mentality. I think the driverless car has actually done us all a favor. The speed at which that car, and the story around it, took off last year has done us all a service, because people are starting to pay heed to the notion that exponential progression is real.
GERD LEONHARD: Yes, it’s interesting how that all came about. Of course, really autonomous driving and driverless cars are still very limited. The fact [is] that they’re not actually driving machines like we’ve seen in science fiction. They do certain things and they work in certain situations, but you wouldn’t take a self-driving car onto the German highway when it’s snowing outside, go two hundred miles an hour, and not crash, right? I think this is a misconception. As Paul Saffo likes to say, we shouldn’t confuse a clear view with a short distance. A lot of these things are here, but they’re not here in the way we thought–like hopping into the flying car and going off to buy groceries or something. I think clients look at this the linear way and wait and see. As I like to say, wait and see means wait to die. Because by the time you’re ready to actually face the fact that things have changed, it’s way too late to change your business model. You just get discarded. Look at what happened to media and the record labels as the best example.
FRANK DIANA: Yes. That’s the point. What it did for us is to say “this is not twenty years into the future.” This will happen sooner. It’s not here today, but it will happen sooner than we think. I think that’s the key point. Because now, to your point, we need to start focusing on this. We need to understand it. We need to understand the potential implications and how we might respond. Whereas prior to that, I don’t think there was an acceptance that these things were coming at a rapid pace.
GERD LEONHARD: Well, I think the main thing I face with my clients is that I need to get their assumptions checked, because the assumptions they have about how business works and how the world works are very much based on their previous success. So, you have been successful doing things X, Y, Z, and therefore you assume the future holds that you do things a little bit better, cheaper, more efficiently, and you’ll still be okay. But the message is: you’re not! Because this future is not at all like what we have now. So, if I as a futurist am just going to talk about what the world could look like in five years, I won’t have a job–because it’ll be obvious to everyone what the world looks like, and you can ask any app to tell you so. What technology cannot do is use the human way of understanding it. I think it was Alvin Toffler–actually Isaac Asimov–who said he does not want to be a speed reader, he wants to be a speed understander. That’s really what we have to face: our role is changing. If you’re a doctor or a medical company or a hospital or a tourism organization, you have two roles. You have one today, and then you have another one tomorrow, which may be completely different–[one] that you would never have conceived. All of a sudden AirBnB, for example, is now moving into its second generation, which is providing travel experiences–not just renting places, but providing experiences. Companies that start on that premise are much quicker to morph than companies like the Morgans Hotel Group or the Hilton chain, because those come from a history of success that tells them all they have to do is make it more efficient.
FRANK DIANA: Right. You call them assumptions, and I think we’re speaking the same language there. I’ve been calling them our belief systems, and saying that our belief systems are being challenged. And this is really the first time, I’d say since the Second Industrial Revolution, that our belief systems are being challenged at such a deep level.
GERD LEONHARD: You know, it all comes down to beliefs and understanding. And by beliefs I don’t mean religion or anything like that. I mean beliefs about how the world works. There’s a great saying–from China or Cambodia, I think–that says, “assumptions are the termites of relationships.” Because we assume too much about the things we know, we don’t see other options, and then we end up misguided on some bizarre path that takes us nowhere. Then our assumptions get smashed and we have to backtrack. This is the most important exercise for clients today: to ask, how do you really think this works? What are you assuming your customers need? Are you still relevant in this world when that assumption changes? If we’re not indispensable, we are being dispensed with. That’s digital Darwinism, basically. If you’re a bank teller, you’re being dispensed with. If you’re a bookkeeper, you’re being dispensed with. If you’re a person who makes images of trees and nothing else, you’re being dispensed with. So, we need to move up Maslow’s hierarchy of needs. We have to move into the upper part of the pyramid, which is realization, understanding, meaning, purpose. And that’s really happening to all professionals and to all companies that are made up of professionals, like consulting companies for example.
FRANK DIANA: Yes. Right. Well said.
GERD LEONHARD: I think that is one of the key challenges. So, do you want to wrap it up with something on your end? I think we have already gone on for quite some time. Do you want to give a final statement?
FRANK DIANA: Yes, sure. If you wrap all this up, this whole discussion, I think the term I’ve been using–and which I think is still very appropriate–is that we’re going to “reimagine the future”. There’s a lot of discussion around transformation, and digital obviously continues to be a big buzzword. But at the end of the day, and given everything we just talked about, if we don’t step back and reimagine this future, do it collectively, and do it in a way that avoids unintended consequences, I don’t believe we’re going to realize the benefits that science and technology can deliver to humanity. So again, to me it’s about reimagining.
GERD LEONHARD: Yes. Very nicely said. I sometimes say it’s really not about calculation, it’s about imagination. We calculate too much and we don’t imagine enough. Because when we imagine more, all of a sudden we can discover new ideas. It’s interesting to see that a lot of incumbents have a very hard time with imagination and the start-ups do not–because that’s all they’ve got: imagining what it would be like, “if”. So, a message to our clients is: let’s start imagining more, and learn that from the start-ups, rather than calculating our return on investment all the time. That is what poisons the pie for us–we are always calculating and looking for efficiency and time savings. None of that will matter when we stop being important, when we stop being relevant. Efficiency won’t matter at all when I’m irrelevant.
FRANK DIANA: Well said.
GERD LEONHARD: This is a key point. Frank, just tell people quickly where they can find out more about your work.
FRANK DIANA: You mentioned it earlier: frankdiana.net is my blog, and there’s a lot of content there supporting this discussion at a deeper level. Folks can go there, and they can contact me if they’d like.
GERD LEONHARD: Great. I’m Gerd Leonhard, G E R D. You can find me anywhere on the Internet. Just look for G E R D–like gastroesophageal reflux disease–right after that. Just put in “Futurist Gerd” and you’ll find me everywhere. My main website is futuristgerd.com, and my new book is “Technology vs. Humanity.” It’s on Amazon, or at techvshuman.com, which is pretty much everywhere as well. I have a pretty active YouTube channel, gerdtube.com. You can find me there–there are about three hundred fifty hours of content. So if you’re bored on the weekend, you can watch it all on fast forward. Thanks very much, Frank, for sitting in and doing this together. And thanks everyone for listening. Hope to see you down the road. Bye.
FRANK DIANA: Thank you.
GERD LEONHARD: Bye.