In a recent article, Kevin Drum makes a compelling argument that You Will Lose Your Job to a Robot—and Sooner Than You Think. The piece is a little long, but it is a must-read. At the heart of his argument lies the exponential progression of artificial intelligence (AI). Using the human brain as a barometer, he projects that AI will reach one tenth the power of the human brain by 2035, and full human-level AI by 2045.
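Drum’s timeline rests on simple doubling arithmetic: going from one tenth of human-brain power to full power takes log2(10) ≈ 3.3 doublings, so at a fixed doubling period of two to three years the remaining gap closes in roughly seven to ten years, consistent with the 2035-to-2045 projection. A minimal sketch of that arithmetic (the doubling periods are illustrative assumptions, not figures from the article):

```python
import math

# From 1/10 of human-brain power to full power requires log2(10) doublings.
doublings_needed = math.log2(10)  # ~3.32 doublings

# Try a couple of hypothetical Moore's-law-style doubling periods.
for period_years in (2, 3):
    years = doublings_needed * period_years
    print(f"doubling every {period_years} yr -> {years:.1f} years from 1/10x to 1x")
```

At a three-year doubling period the result is roughly ten years, which lines up with the article’s 2035-to-2045 gap.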
The author responds to AI skeptics and their disbelief in the notion of significant technological unemployment. Here are the common doubts he addresses (see the article for the author’s responses):
We’ll never get true AI because computing power won’t keep doubling forever. We’re going to hit the limits of physics before long
Even if computing power keeps doubling, it has already been doubling for decades. You guys keep predicting full-on AI, but it never happens
Okay, maybe we will get full AI. But it only means that robots will act intelligent, not that they’ll really be intelligent
Waves of automation—steam engines, electricity, computers—always lead to predictions of mass unemployment. Instead they just make us more efficient. The AI Revolution will be no different
As Mr. Drum explores a future where the meaning of work changes, he describes four basic categories of work and the likelihood that AI will replace the jobs in each:
Routine physical: digging ditches, driving trucks
Routine cognitive: accounts-payable clerk, telephone sales
Nonroutine physical: short-order cook, home health aide
Nonroutine cognitive: teacher, doctor, CEO
The author describes a future where routine jobs will be the first to go, and by 2060 all non-routine jobs will follow. Although 2060 seems far away, the employment apocalypse begins much sooner: as AI takes over, the role of work in society will change.
The author posits that one way or another, the answer to the mass unemployment of the AI Revolution involves some kind of sweeping redistribution of income that decouples it from work. Or a total rethinking of what “work” is. Or a total rethinking of what wealth is. He explores some possibilities:
- The welfare state writ large
- Universal basic income #1
- Universal basic income #2
- A tax on robots
- Socialization of the robot workforce
- Progressive taxation on a grand scale
- Wealth tax
The author correctly suggests that the topic is not getting the proper level of political discourse, and provides his own point of view on the political aspects of this discussion. No matter where you fall on this very polarizing topic, educating ourselves is the best path towards creating a prosperous future.
15 thoughts on “Will a Robot Take your Job?”
Pretty cool thought, thanks for sharing. Imagine a new wave of ‘human doctors’ for these robots: folks who will design, assemble, and program them; folks who will market, sell, repair, and maintain them; folks who will build new functions and manage the obsolescence of these machines; and more.
Until the machines can do all that for themselves.
Say we were building a race car. No matter how cleverly we tweak the engine to boost performance, the car can still only go as fast as its weakest part allows. It’s impressive if we create an engine that can propel the car at 1,000 mph, but it’s a pointless exercise if the tires can only handle 150 mph.
Futurists need to learn how to think holistically. They need to think about the entire picture, all the parts involved in the human endeavor, and not just the latest fancy new gizmo that’s fun to speculate about. They need to be looking not at exciting technological change agents, but for the weakest part in the machine, the single point of failure.
The greatest futurist teacher at the moment may be Donald Trump. He may be the most visible symptom of what’s going to increasingly happen as more and more people are left behind by knowledge-driven change, or even just fear that they will be left behind. It’s happening all over western culture, the search for overpromising reactionary strongmen who claim they can take us back to the “good old days” when we felt we were in control of our destiny. The election of Trump should prove to us that when people feel their backs are against the wall, we can’t count on them to be rational, even in the pursuit of their own self-interest.
Futurists are very well-educated people, members of the intelligentsia, and seem to suffer from biases common to that class. They seem to assume that those being left behind will just conveniently lie down and die to make way for the glorious new day that is coming. But we’re never going to arrive at that glorious futurist day unless we keep our focus on the weak link, the single point of failure that can bring the house down.
Don’t look to the strongest point in the system, look for the weakest point.
Yes, eventually, over time, an entire new society would emerge that would likely be better than where we are now. The industrialization of agriculture seems a good example of that.
The problem is the pace of change.
All the futurists I’ve read predict what Kevin Drum is pointing to: an exploding pace of change. And not just exploding, but accelerating too, exploding ever faster.
If we could control the pace of change and introduce changes gradually over multiple generations then it might work. But if we allow the pace of change to run wild like an uncontrolled force of nature it’s probably going to crash the system.
This is the first time we’ve attempted to implement such radical changes so quickly. It’s just simple logic to assume that anyone attempting something so new and so big is probably going to screw it up the first time, and maybe the next ten times too, until they figure out the right way to proceed. The bottom line question is, do we survive the learning curve?
Futurists’ writings often sound technically savvy but immature, like a group of ten-year-old boys who stole dad’s car keys and are now sitting in the driveway saying to each other in excited voices, “Let’s see how fast this baby will go!” I get that such a perspective makes for good articles and speeches, perhaps because insanity is fertile ground for compelling entertainment.
I see nothing on the horizon that will control the pace of change – in fact, I see it getting much faster. The single biggest difference between this revolution and everything that came before it is that very pace.
Hi Frank, well, the something on the horizon that will control the pace of change is us. We will adapt to the revolutionary new environment being created by the knowledge explosion, or we will die; that is the natural law which rules over everything, including AI.
I agree learning to control knowledge will be very difficult. It may be impossible. Of course none of us can claim to know for sure. But given the price tag for failure it seems the only rational response to the challenge is to stop saying it’s impossible, assume that taking control is possible, and get on with trying.
It’s remarkable the supreme confidence we have when it comes to developing new knowledge, and how incredibly defeatist we can be when it comes to taking control of the process. For example, whenever I mention this on science forums, someone with a PhD inevitably proclaims the only solution to be human migration across the galaxy. To me, this is the new Luddite position, clinging to a past when we could just move over the next hill once we’d trashed our current location.
Consider a dam. Water becomes a very useful resource once we have it under our control. If we can release the water in the river at a controlled pace, we win. Without the dam, sooner or later the river running wild will flood the town below.
We should be thinking of knowledge as an element of nature like water. The price tag for continued progress is to learn how to build knowledge dams, so that we are in control of the pace at which knowledge driven change unfolds.
Futuristic thinking seems stuck in the previous era when knowledge was hard to come by and so we celebrated each new piece of the puzzle as it emerged. What we should be celebrating going forward is our ability to build the dam, to take control of this force of nature.
Instead of saying “AI is amazing, it will do this, it will do that” we should be asking…
What do we want AI to do? And when? How fast do we want to bring it online? How are we going to control the pace of its emergence? To what degree should it emerge at all?
Phil, you know I understand your argument and agree with you. I am merely expressing what I am hearing and seeing, and I don’t see these issues being top of mind for anyone who can actually do the things you are suggesting.
I understand and agree, and apologize for my typoholic mania. Perhaps I should learn to control that before demanding we control the knowledge explosion; that would be a reasonable rebuttal to my too many posts.
What I’m looking for is a balanced debate between those who want to go full speed ahead and those urging we slow down. Does such a debate exist? I’m objecting mostly to my perception (perhaps uninformed) that even the highest authorities are trapped inside an outdated “more is better” groupthink. This impression seems validated by your observation that you “don’t see these issues being top of mind for anyone that can actually do the things I am suggesting”. That’s the impression I have too.
If that’s true, how do we rock the group consensus boat? How do we find the thought leaders on such topics? If they are all stuck inside the outdated “more is better” paradigm, how do we engage them and effectively challenge that perspective?
I’m very confident I can present an inconvenient challenge to the group consensus, but unlike you I have no credibility, credentials, connections etc that would allow me to engage the cultural authorities people might actually listen to. If my confidence is overblown, ok, another good solution would be for someone to debunk my positions, which I don’t actually want to be correct.
If it’s true that we basically agree, what are we going to do about that?
I think your conclusions are spot on. Accepting this is the last thing the political class wants – acknowledging a potential jobless future disrupts too many political agendas. Something significant has to happen before anything happens on that front. On the other hand, I do not support anything that slows down the exponential realization of those things that benefit society: curing cancer, clean energy, elevating the standard of living in emerging economies, etc.
It seems that innovations, like extracting oil from shale, vertical farming, IoT, solar power, 3-D printing, etc., will expand and change work opportunities. Perhaps not enough to keep full employment, but enough to move and expand labor needs to new areas. What is impossible today will be employing people tomorrow. I am thinking about roads and infrastructure that recharge electric vehicles, highways with embedded solar panels, etc. We need to retool the planet, but that takes will and purpose, which are both in short supply.
Short-term bridge: many of the new jobs will exceed the skill levels of those who need work.
What Kevin is speaking to has already worked once, when we mechanized agriculture. The principle is sound; it’s the pace of change that needs our attention. For example, if we’d put farm labor out of work before factory jobs were available for those affected, the result would have been revolution.
The pace of change is a problem in itself if it undermines confidence in the future too much. For example, why should I get an education to prepare for a career if that career is likely to be gone not long after I get there? What’s the point of faithfully making mortgage payments my whole life if by the time I’m 50 my employer lets me go, no one else will hire me, and I lose the house? Civilization depends upon great masses of people making substantial investments in their future, which they are less likely to do if they don’t know what that future will be.
Well, how about this conception? We are the political class. Politicians are not typically leaders, but followers. They are looking to us in the public to tell them what we want, because they succeed in their careers to the degree that they give us what we want.
Your next point is the tricky bit. The benefits and dangers of the knowledge explosion seem to be inextricably connected. Knowledge to do good is often converted into knowledge to do bad, and of course there is the law of unintended consequences.
For example, providing better education in Afghanistan would seem to be an obvious benefit. But wait: if doing so too fast causes those who can’t adapt to attack the West with dirty bombs or worse, is it still a good idea? A wiser course may be to still introduce constructive change, but at a pace the society in question can successfully manage.
I’m entirely open to debate on this, but my perspective for now is that we ought to conceive of the knowledge explosion as a factory assembly line. The argument here is that the assembly line itself needs to be slowed down, which will inevitably affect both the good and bad outcomes rolling off the assembly line. Even at the slower pace we still have the ability to more carefully calculate what the assembly line should be producing.
The pace should be set by the weakest point in the cultural system, that is, any factor which has the potential to blow up the assembly line.