The attack on the US Capitol revealed the dangers of Big Tech media platforms—but the EU's envisaged competition laws won't fix them.
The European Commission has earned a reputation as the world’s foremost regulator of Big Tech companies. Led by the commissioner responsible for ‘a Europe fit for the digital age’, Margrethe Vestager, its latest salvo was launched on December 15th, with proposals for a Digital Services Act (DSA) and a Digital Markets Act (DMA). But on January 6th the inadequacy of these proposals was laid bare, with the gripping images of a deadly mob ransacking the US Capitol.
The attack on the heart of American democracy was incited and planned over Facebook, Twitter, YouTube and other digital-media platforms and represents a warning to Europe. Unfortunately, the commission’s approach is poorly equipped to deal with the toxicities of the digital-media platforms’ business model.
Vestager retains the competition brief and the DSA/DMA interventions are narrowly framed around impacts on competition and consumers. But most of the worst abuses of the Big Tech media cannot be dealt with through a competition lens. The commission’s proposals lack not only regulatory teeth but a broader vision of how digital-media platforms should work—my contacts in Silicon Valley are rolling their eyes in amusement.
Since the birth of the ‘social media’ platforms 15 years ago, democracies around the world have been subjected to a grand experiment. Can the news and information infrastructure—lifeblood of any democracy—be dependent on digital technologies that produce a global free-speech zone of unlimited audience, with algorithmic (non-human) curation of massive volumes of mis- and disinformation, spread with unprecedented ease?
The evidence is clear that this experiment has veered frighteningly off course, like a Frankenstein monster marauding across the social landscape. Facebook is now the largest media giant in history, a combination of publisher and broadcaster with some 2.6 billion regular users and billions more on its subsidiaries WhatsApp and Instagram. A mere 100 pieces of Covid-19 misinformation on Facebook were shared 1.7 million times and had 117 million views—that’s far more than the New York Times, Washington Post, Bild, Daily Mail, Le Monde, Süddeutsche Zeitung, ARD, BBC and CNN can command, combined.
The digital-media empires have been used for disinformation campaigns in more than 70 countries to undermine elections—even helping elect a quasi-dictator in the Philippines—and to amplify and even livestream child abusers, pornographers and the Christchurch mass murderer. How can there be concerted action on climate change when a majority of YouTube climate-change videos deny the science and 70 per cent of what YouTube’s two billion users watch comes from its sensation-driven recommendation algorithm?
Traditional media are subject to certain laws and regulations, including a degree of liability for what they put out into the world. While there is much to criticise, at least they use humans to curate the news, and choose what’s in and out of the news stream. That results in some accountability, via potential libel actions and other checks and balances.
But the algorithmic curators are on automatic pilot. That is dangerous in a democracy—much like killer drones, for which no human takes responsibility or bears liability. The companies have enabled division, distraction and outrage to the point where a fractured society lacks shared truths or even shared sensemaking. Non-human curation, when combined with unlimited audience size and frictionless amplification, has failed as a foundation for a democracy’s media infrastructure.
Vestager and the commission do not seem to recognise that the competition paradigm utterly fails to address these dangerous abuses. True, specific DSA/DMA proposals have merit. If turned into law, all European Union member states would have their digital services and markets covered by one set of laws, instead of 27. The companies would have greater responsibility for removing ‘illegal content’ (vaguely defined) in a timely manner—an EU version of the German Facebook law. The legislation relies heavily on transparency of platforms and algorithms, but, to adapt an old saying, ‘A lie can travel halfway around the world while transparency is putting on its shoes.’
Consumers, moreover, would have a right to ‘opt out’ of content recommendations. But that gets the accountability arrow backwards: the default should be no private data collection without user consent. Why is the EU continuing to permit this broken model and the noxious surveillance inherent in it?
The commission’s old approach simply does not dig deep enough to grapple with the distinctive qualities of the Big Tech animal, which requires a new taxonomy. And with Silicon Valley constantly innovating to break rules and find loopholes, it’s time for a major reset—not only to save our democracies but also to provide the best chance to redesign these digital-media technologies, so that we can rediscover their promise and reduce the risks.
These businesses are creating the global public infrastructure of the digital age. That includes search engines (Google), portals for news and connecting (Facebook, Twitter, Instagram, WhatsApp), films, music and live-streaming (YouTube, Netflix, Spotify, Zoom), navigation (Google, Apple), commercial marketplaces (Amazon, Amazon, Amazon) and labour-market platforms (Uber, Upwork, Amazon Mechanical Turk, Clickworker).
The companies like to remind us that they are providing this for free—that all we have to do in exchange is hand over unlimited access to our private data. But that has turned out to be a very high price indeed. Vestager herself has disputed that these services are actually free, at one point declaring: ‘I would like to have a Facebook in which I pay a fee each month, but I would have no tracking and advertising and the full benefits of privacy.’
So why is she holding back? The EU should require a whole new business model, based on treating these companies more as investor-owned utilities. Historically speaking, this is what Europe and the United States did with the telephone, railway and electricity industries (Mark Zuckerberg himself has suggested such an approach). As utilities, they would be guided by a digital licence—just as traditional bricks-and-mortar companies must apply for various licences and permits—defining the rules and regulations of the business model.
Along those lines, the EU should realign its digital-media market according to a more relevant set of values than ‘competition’. In addition to the utility model, other relevant frameworks include ‘product liability’ or a fiduciary ‘duty of care’, in which businesses are bound by a kind of Hippocratic oath and a precautionary principle to ‘first, do no harm’. British authorities have been trying to erect the foundations of this approach; the EU should partner with them.
These companies did not ask permission to start sucking up private data or tracking our physical locations or collecting every ‘like’, ‘share’ and ‘follow’ into psychographic profiles of each user, to be targeted and manipulated by advertisers and unsavoury political actors. They started this data grab secretly, forging their destructive brand of ‘surveillance capitalism’.
Now that we know, should society continue to allow this? Shouldn’t platforms’ default requirement be to obtain users’ permission to collect personal data—opt-in rather than opt-out? Even the highly touted consent provision of the General Data Protection Regulation is riddled with loopholes.
The new utility model should indeed encourage genuine competition, by limiting the mega-scale audience of these media machines. Do most everyday users really need the capacity to reach an audience of thousands or even millions? That’s bigger than most kings and prime ministers could reach through most of human history.
The utility model should also restrain the use of specific engagement techniques, such as hyper-targeting of content and ads, automated recommendations, addictive behavioural nudges (such as autoplay and pop-up screens) and the dark patterns and filter bubbles which allow manipulation.
Nothing about the Silicon Valley business model was ordained by divine authority or natural law. It can and should be subjected to democratic control and government regulation. But the EU, like the US, is failing to do that.
At the very least, the EU and member-state governments should update current laws to ensure they apply to the online world. For example, many countries have laws restricting violence and advertising in cartoons and other children’s programming on television. Yet Google’s YouTube and YouTube Kids have violated these rules for many years.
With so much at stake, it is not clear why the commission continues to wield the small hammer of its narrow ‘competition’ framework. A focus on competition goes back to the European Commission presidency of José Manuel Barroso, from 2004 to 2014, when the commission’s primary goal was to make the EU ‘the most competitive economy in the world’.
That drive crashed in the 2008 global collapse, followed from 2010 by the eurozone crisis, but its influence survived in Europe’s later obsession with austerity. Now ‘competition’ lingers as a zombie category in the commission’s attempts to create some weak rules for Big Tech.
Ineffective regulations are worse than none at all, since they create a false sense of governance. The challenge is to establish sensible guardrails for this 21st-century digital infrastructure, so that we can harness the good these technologies provide and greatly mitigate the dangers. As with the promise of the internet itself, the digital-media platforms started out small but exploded into monopolistic giants, which have written self-serving rules of their own that threaten our democracy (as well as free competition).
It is time for the era of self-regulation to end and the commission seems to understand that. But more powerful, democratic frameworks are needed than its outdated, technocratic focus on competition.