The end of work is an old story freshly told. Our predecessors heard this tale two centuries ago, and then again in the 1950s and the 1970s. Today, academia, consulting firms and the media swear by it.
Prof. Dr Caspar Hirschi spells out the similarities and differences in these storylines, as well as the scientific data behind them. This brilliant appraisal helps us look at the debate through new lenses.
Apocalypticists of Automation
“Robocalypse Now?” This tantalizing title heads a long-term study on the relationship between productivity growth and human employment, published in June 2017 by David Autor, an economist at the Massachusetts Institute of Technology, and his colleague Anna Salomons from Utrecht University. The two authors analysed data from 19 industrialised countries, covering the period between 1970 and 2007. Their analysis sought to answer, from the recent past, a much-discussed question about the near future: do efficiency gains through automation reduce the need for human labour? Autor and Salomons found no evidence to support that conclusion. As a rule, countries with strong productivity growth were also able to raise employment levels. While advancing automation has led to a decline in industrial-sector employment, this has been more than offset by job growth in other sectors. Does the finding come as a surprise? Not at all: the study largely confirms what observers have been seeing for a long time.
An old story, freshly told
The attention paid to the study had less to do with its content and more with its framing. Autor and Salomons illustrate how predictions of the end of work as a result of automation have little basis in acquired experience. Machines have been replacing human labour for over two hundred years. At first, with the introduction of the spinning jenny and other textile machines, the transformation was all about the automation of physical skills. By the end of the 19th century, however, the digital programming system used in industrial looms -- the punch card -- was also being employed for computation. Machines thus took over intellectual work from humans, and quickly became much more efficient at certain tasks. In 1890, the Census Bureau of the United States introduced a punch-card system developed by the German-born engineer Herman Hollerith for the national census, thereby accelerating the evaluation of the collected data by several years. Since then, the pre-eminence of machines in data computing has expanded massively. And yet, after almost 130 years of automating mental activity, we still lack evidence that humans might someday run out of work.
Throughout modern history, the fear of automation-related mass unemployment has come and gone in cycles. The gaps in between are usually so long that most people forget that such dark scenarios were once conjured without any of the predicted consequences materialising. It's a bit like announcing the biblical Apocalypse in former times: between the first and the eighteenth centuries, every generation of Christians expected the end of the world in the immediate future. Meanwhile, doomsayers were never embarrassed by the fact that all the apocalypticists before them had been wrong. Their reasoning followed a certain logic: God in his infinite mercy had granted devout sinners a deferral. Now, however, his patience was definitely exhausted and the end was near.
The title “Robocalypse Now?” gets to the crux of the matter in many ways. Besides the Bible, it also alludes to Francis Ford Coppola's Vietnam War classic “Apocalypse Now”. In 1964, as the undeclared war against the Viet Cong and North Vietnam escalated, American President Lyndon B. Johnson instituted a “Commission on Technology, Automation, and Economic Progress” to investigate whether the huge productivity gains the world was experiencing could become a work-destruction machine. It was neither the first nor the last commission of its kind. The reports such commissions periodically produced contain ideas and recommendations that are still presented today as if they were new recipes for radical times. For example, the aforementioned Commission of 1964 recommended a minimum income allowance to the US Congress:
“Persons with incomes below an acceptable standard would receive a tax rebate or cash allowance, just as persons with incomes above exemption levels pay taxes.”
Those who conjure the phantom of automation-related unemployment usually also invoke other spirits from the past. Universal basic income is one of them.
The 1950s had better reasons to fear automation
Compared to today, there were better reasons in the 1950s and ‘60s to fear work withering away as a result of automation. The breakthrough in semiconductor technology – which led to the development of integrated microelectronics and launched the computer age – had taken place not long before. In 1947, a research group around John Bardeen, Walter Brattain and William Shockley had assembled the first transistor at AT&T's Bell Labs. In the early decades of the post-war period, it was difficult to estimate how rapidly computer technology would develop and what impact it would have on the labour market. Experts predicted, for example, a quick implementation of translation programs. These, they thought, would save valuable time in evaluating Cold War intelligence intercepts. At the same time, they would make translators obsolete. Hardly anyone would have thought that this prediction would remain unfulfilled more than sixty years later, despite the undeniably rapid recent progress of translation software such as DeepL.
Despite nearly full employment, another factor drove expectations of the end of work after the Second World War: a revolution in home appliances that continues to affect our lives today. Washing machines, dishwashers, refrigerators, electric stoves, hand mixers and televisions were all widely adopted in the 1950s. They raised hopes for a fully automated household that exceeded even the horizons of what we imagine today.
In the 1955 study “The Age of Automation”, economic sociologist Warner Bloomberg imagined a thinking stove “to which one can say, in effect, ‘Roast me a three-pound chicken,’ and then leave the rest to the machinery”.
Bloomberg went on to conceive of a home-based production process modelled on a fully automated factory. “Out of the freezer via a conveyor-belt system would come a bird of the correct size. At the stuffing machine it would be joined by pre-prepared dressing (probably preserved by radiation) and thence it would proceed into the stove,” he wrote. “After being roasted to perfection, it would be grasped by a transfer device and placed upon another conveying system which would hand it over to the ingenious, multi-bladed, carving machine.”
The example of Bloomberg's stove is interesting not only because it is an old vision of the future that still seems futuristic today. Even more revealing is the fact that, although the fully automated device has long been technically feasible, it has never seen the light of day. The deciding factor in its non-existence is the device's lack of desirability. Technological innovations do not have the deterministic inevitability that automation prophets like to imply. No matter how intelligent new technologies are, if they do not generate human demand, they become smart garbage.
Finally, compared to the 2010s, the 1950s and ‘60s showed higher productivity growth, driven by a larger industrial sector. Productivity gains in industry created strong spillover effects, which in turn contributed to unforeseen growth in the services sector. Hence, in the 1950s, the fear of automation-related mass unemployment was at least plausible in light of the economic dynamics.
Today, however, things are different: we are looking back on a decade of extremely sluggish productivity growth. Moreover, current economic data provides no statistical basis for the often-propagated claim that the economy is changing ever faster. The idea that we have recently experienced the greatest acceleration in history requires an enormous dose of historical amnesia.
It’s not digitisation, stupid!
What factors have led to yet another surge of automation fear in recent years? Pointing to digitisation as a fundamental technological push is not a sufficient explanation, because the digital transformation of economic life has been with us for decades. Three additional factors shape today’s automation debate. The first is the influence of the consulting industry – itself a service sector whose size owes to spillover effects from more productive sectors. Global consulting firms are among the most energetic advocates of the automation debate. To this end, they employ a phrase with which they proclaimed a supposedly fundamental shift before the last financial crisis: “This time is different!” Before 2007, these firms used the slogan to predict a new age of financial stability. Now they announce a new age of technological instability. If consultants can convince clients that everything will soon be different, they also increase those clients’ need for advice. Unlike in the 1950s, automation fears today fuel a billion-dollar business.
Second, the media industry makes a decisive contribution to the current automation debate. Journalists are among the professional groups most affected by digitisation right now, and technology-related unemployment is a personal risk scenario for many media professionals. With reports describing a future of intelligent robots and unemployed humans, they transfer that risk from the specific circumstances of their industry to the economy as a whole. It is noteworthy, however, that the media industry’s crisis has little to do with journalism being threatened by automation or artificial intelligence. Rather, the cause lies in the network effects of digital platform capitalism, which allow Google and Facebook to establish an oligopoly on online advertising without producing any media content of their own. When journalists invoke the automation apocalypse, they distract the public from the real challenges that digitisation poses to the capitalist economy and democratic politics.
A third factor fuelling the current fear of automation is science itself. Researchers in the social and natural sciences are today evaluated using quantitative performance metrics, which also function as unintended incentive systems. To accumulate as many citations as possible, it helps to publish on heavily hyped topics. In this way, a paper with bold theses about the future of automation in the United States can earn its authors many citation points, even if it is methodologically questionable. Symptomatic of this is the tremendous response that the economist Carl Benedikt Frey and the computer scientist Michael A. Osborne achieved with “The Future of Employment: How Susceptible Are Jobs to Computerisation?”. The working paper, published on a University of Oxford server in 2013, made the two authors world-famous overnight. All it took was a prediction of pseudo-mathematical accuracy in the abstract: “According to our estimates, about 47 percent of total US employment is at risk.” How did the two authors arrive at this precise estimate – “about 47 percent” – of a future risk whose arrival they predicted rather imprecisely as “some unspecified number of years, perhaps a decade or two” away?
Frey and Osborne started with a list of approximately 700 job titles from the U.S. Bureau of Labor Statistics, made available by the U.S. Department of Labor in a database called O*NET. On page 30 of their working paper, they describe what they did with the list:
“First, together with a group of Machine Learning researchers, we subjectively hand-labelled 70 occupations, assigning 1 if automatable, and 0 if not. For our subjective assessments, we draw upon a workshop held at the Oxford University Engineering Sciences Department, examining the automatability of a wide range of tasks. Our label assignments were based on eyeballing the O*NET tasks and job description of each occupation.”
It cannot be a coincidence that these methodological explanations are buried in the middle of the 72-page study. Let's take another look at what Frey and Osborne did: first, they went through the list of occupations and highlighted the profiles they subjectively considered automatable. They then invited a small group of machine-learning specialists – a profession with a history of chronically overestimating the potential of its own technologies. With these specialists, the authors looked at the tasks of each occupation and discussed their automation potential. Then came the crucial part of the study: the quantification of non-representative opinions with the aid of a few formulas. The result was the figure of “about 47 percent”, inserted into the abstract as if it were the outcome of a mathematically precise calculation.
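The mechanics behind such a number can be made concrete. In their paper, Frey and Osborne extended the 70 subjective hand labels to the full occupation list with a classifier and then summed up the employment in occupations whose predicted probability exceeded 0.7. The following is a minimal sketch of that pipeline using entirely synthetic data and a hand-rolled logistic regression as a simple stand-in for their actual classifier; all numbers and variable names here are illustrative assumptions, not the study's data.

```python
# Sketch of the Frey-Osborne procedure on synthetic data:
# hand-label a small subset of occupations, fit a classifier,
# and extrapolate "automation probabilities" to the full list.
import numpy as np

rng = np.random.default_rng(0)

n_occupations = 700          # roughly the size of the occupation list
n_labelled = 70              # the subjectively hand-labelled subset

# Synthetic stand-ins for the O*NET task variables (in the paper:
# measures of perception, manipulation, creativity, social intelligence).
features = rng.normal(size=(n_occupations, 9))
employment = rng.integers(1_000, 500_000, size=n_occupations)

# Step 1: subjective 0/1 labels for the subset (here simulated by a
# hidden rule plus noise -- in the paper, by "eyeballing" descriptions).
labelled = rng.choice(n_occupations, size=n_labelled, replace=False)
y = (features[labelled, 0] + 0.5 * rng.normal(size=n_labelled) > 0).astype(float)

# Step 2: fit a classifier on the 70 labels (plain logistic regression
# trained by gradient descent, standing in for their actual classifier).
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

X = features[labelled]
w, b = np.zeros(X.shape[1]), 0.0
for _ in range(2000):
    p = sigmoid(X @ w + b)
    w -= 0.1 * X.T @ (p - y) / n_labelled
    b -= 0.1 * np.mean(p - y)

# Step 3: extrapolate probabilities to all 700 occupations and declare
# everything above the 0.7 threshold "high risk".
probs = sigmoid(features @ w + b)
high_risk = probs > 0.7

# Step 4: the headline number -- the employment share of high-risk jobs.
share_at_risk = employment[high_risk].sum() / employment.sum()
print(f"Share of employment 'at risk': {share_at_risk:.0%}")
```

The sketch shows how much weight step 1 carries: the seemingly precise final percentage is a deterministic function of 70 subjective labels, dressed up by the classifier and the threshold.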
Publicly, however, the gamble paid off. Many media outlets present the figure from the “Oxford study” as a quasi-scientific fact and integrate it into their automation narrative; the risk becomes a certainty. When I was interviewed for a documentary about the history of robotics a few months ago, I was confronted with the statement:
“A much-cited study by Oxford University says about 50% of today's jobs will be replaced by robots over the next few decades.”
When Frey and Osborne published their paper in a scientific journal in 2017, the ominous figure of 47 percent had disappeared from the abstract.
The constant circumstances of automation scares
Besides the factors specific to each automation hype cycle, are there general conditions that must be met to awaken the fear of a labour-market apocalypse? Looking back over the last few centuries, the fear of automation appears to rest on an unholy alliance of the kind we know from politics – in this case between technology enthusiasts on one side and critics of capitalism on the other. The technology enthusiasts deem machines an uncontrollable force that will soon surpass humans in all activities and destroy everything that exists in order to build a better world. The critics of capitalism see the moment approaching when capitalism will devour its own children and push humans to the margins of history. For an automation hype cycle to emerge, the futuristic fantasies of progress apostles and doomsday prophets must meet at one crucial point: the vision of a Darwinian struggle between humans and machines whose outcome has long been decided.
There seem to be times that offer a particularly favourable environment for such unholy alliances. In the 1950s, 1970s and 2010s, the experience of overcoming an economic crisis and the expectation of an imminent technological boost sufficed to fuel prophecies of automation ending work. Since economic ups and downs are just as much constants of capitalism as technological innovation, even the apocalypticists of automation are unlikely to run out of work any time soon. Remember – their Christian predecessors needed over a millennium to stop interpreting every comet and earthquake as an announcement of the imminent end of the world.