Digital dystopias are overdone but inequality is rising. The answer lies in treating data as a commons and Big Data as a collective-action problem.
The fear of the machine is back. Dystopian views of a world without jobs abound as autonomous vehicles, humanoid robots and supercomputers are thought to replace human workers. Recent progress in artificial intelligence has been stunning: machines are taking phone calls, understanding questions and suggesting solutions—often much faster than a human call-centre worker.
Nevertheless, so far there is no evidence that job losses are mounting. Among the members of the Organisation for Economic Co-operation and Development, unemployment rates are at their lowest since 2006. What has increased is income inequality. The digital economy demands new policy approaches to address this.
Inclusive growth is threatened in three ways. First, automation due to smart machines and computers is no longer confined to manufacturing but affects jobs—mostly middle-class—in services as well. Entry-level legal services, accounting, logistics and retail will see many tasks replaced by machines which require little oversight or maintenance by employees. Experience suggests that, rather than become unemployed, a growing number of those displaced will compete downwards, leading to further job polarisation.
Secondly, the exponential increase in energy consumption induced by complex algorithms suggests that new applications of artificial intelligence will be geared more towards capital-saving and factor-enhancing innovations. Network management—for instance, electricity grids or traffic-control systems for ‘smart’ cities—as well as expert systems in research, agriculture or health care will likely dominate pure automation innovations. This will lead to a further rise in skill-biased technological change, a trend observed over past decades.
Last but not least, digital companies concentrate profits and wealth as they collect and exploit vast amounts of data for their algorithms to individualise prices and product offers. The underlying network externalities allow innovative first movers to gear up, leaving new entrants little chance to compete for market shares or profits. And algorithmic biases—the tendency of machine decisions to replicate the discrimination deeply engrained in the historical data on which these routines are run—compound the inequality challenge.
Traditional answers to inequalities arising from technological progress remain relevant. Taxing excess (corporate) profits, ensuring consumption of digital services is taxed where it is consumed—currently it is not—and strengthening collective bargaining, to ensure that benefits are widely shared across the economy, remain the first line of response. Similarly, combining digital social security with lifelong learning helps displaced workers invest in their competences to find better-paid jobs.
These measures, while necessary, are however unlikely to be sufficient to reduce inequalities. For a start, the network effects entailed in the digital economy will not allow all companies to raise pay to the same extent as the front runners, leaving profit-sharing arrangements or collective bargaining with little leverage to improve equity.
More importantly, the new business model of this ‘surveillance capitalism’ rests on collecting data without barriers to access, exploiting them with proprietary algorithms and selling the resulting market intelligence. While the data come free—and users are often all too willing to give up their privacy—their harvesting is not: data collections are protected by intellectual property rights, which stifles competition over who can develop the better algorithm. Only when access to data is unhindered and shared at little or no cost can new entrants genuinely compete, as for instance in the business of automatic translation, which relies on web-scraping of freely available translations.
Suggested solutions such as strengthening market contestability by ensuring data portability—for instance, through the development of interfaces and standards—are helpful but again insufficient, as few users will be bothered to maintain several different social-media accounts to ensure competition among providers. Breaking up large digital players, such as Facebook or Twitter, through vertical disintegration would also not solve the underlying problem of securing access to the data collections these companies monopolise.
Recently, Glen Weyl and his co-authors suggested an innovative solution—establishing data property rights so that data become labour. Each user would be remunerated according to how much value his or her data input created for a digital content provider. Moreover, users would be able to trace where and in which context their data were being used, possibly blocking certain forms of data usage (for instance for military purposes).
As elegant as this solution appears, it has one drawback: more data do not necessarily mean better algorithms. The few individuals whose data prove highly relevant would reap large returns, but the average user would still see little from their data input.
Instead, natural monopolies arising in the digital economy should be treated similarly to other ‘commons’ problems. Rather than (only) strengthening individual property rights to regulate externalities, governments should extract incomes to build up public capital. Such approaches exist already in the form of sovereign or social wealth funds and have been implemented for a variety of assets, although often related to natural resources.
Considering data as a commons which allows the extraction of rents would help restore the balance between individual data suppliers and corporate platform providers. Most importantly, with governments investing in such platforms through citizens’ wealth funds, it would leave incentives for algorithmic development intact and would still allow for stricter competition and improved individual returns on data furnishing.
Only when we treat data ownership as a collective-action problem can we hope to address the continuous rise in inequality.