‘Surveillance capitalism’ can only be regulated on at least a European scale—and the mood for change is growing.
We are living in an era of Big Tech dominance, which has ushered in an economy described by Shoshana Zuboff as ‘surveillance capitalism’. Much needs to change for the digital world to respect citizens’ rights, instead of catering only to ever-expanding greed.
The European Union—like other regulatory powers around the world—is seeking to change the status quo by proposing a set of new laws and rules. These attempt, on the one hand, to reform the way in which online platforms can be held accountable for their actions (the Digital Services Act) and, on the other, to impose stricter rules on platforms which abuse their position as digital gatekeepers to avoid competition (the Digital Markets Act).
Will these rules, the DSA and DMA, be enough to tame Big Tech?
Handful of corporations
As we rely on technology to live, work, study or secure essential services, the power of a handful of corporations is ever-growing. While Facebook mediates the social lives and personal messages of over three billion people, Google has taken control of mobile devices, email inboxes and search queries. Delivery companies now use algorithms to send orders to their drivers—or automatically fire them if the artificial intelligence determines they under-perform—and Amazon not only sells books but also centralises most of the world’s cloud-computing capacity.
When the pandemic hit, Big Tech was ready to ‘help’ in the preparation of Covid-19 tracing applications and Google sought to extend its dominance of public school systems—all of this ‘for free’. Some companies, such as Amazon, are so powerful that they are not only monopolies but exercise ‘digital feudalism’: everyone and everything belongs to, or depends on, the same technology overlord.
The surveillance-based business model of the dominant technology companies rests on extracting as much personal information as possible and building profiles to target individuals, on- and offline. Over time, Big Tech corporations build a frighteningly detailed picture of billions of individuals—and that knowledge directly translates into (market) power.
Personal data extraction
This enables the platforms to ensure users stay connected. The longer eyes are glued to the screen, the more personalised advertisements can be displayed and the more personal data can be extracted (and profit accumulated).
Platforms call this ‘engagement’ but in reality it’s just viewers consuming more adverts. Companies such as Facebook are incentivised to keep rival groups of users ‘engaged’ by polarising society with misinformation and other content which undermines democracy.
This is compounded by lock-in: an individual may have joined Facebook out of interest but if she leaves she will lose all her connections there. ‘I have to be on this platform because everybody else is’ is to economists a ‘network effect’—a powerful lever for large platforms to cement their dominance.
We arrived at this point because of the neoliberal wave which led to politicians on both sides of the Atlantic deciding not to regulate internet companies—companies which in some cases started and grew with substantial public funding. The tide has finally turned, though, as citizens, non-governmental organisations, academics, international institutions and policy-makers alike have seen the undeniable impacts of unregulated private companies on human rights and are becoming increasingly active on the issue. Even national and transnational administrations—especially the EU—see there is a need to stop this madness.
European Digital Rights (EDRi) suggests three types of solution, which would drastically improve the digital-platform environment for the benefit of all.
Introduce strong regulation and enforcement to phase out the surveillance-based business model: from the Tracking Free Ads Coalition in the European Parliament and Stop Stalker Ads to the global Ban Surveillance Advertising and the European Data Protection Supervisor, there is a broad understanding that targeted online advertising should be privacy-preserving and human-centred. It should not enable technology corporations to manipulate public debate—or allow others to do so. The goal is to tackle the systemic sources of a business model which promotes the circulation of harmful and polarising content and misinformation.
Regulate such systemic problems of the platform economy, not online speech: a holistic, human-rights approach to content moderation should ensure safe participation and expression for all. While it might seem logical that those who profit from harmful content ‘going viral’ should be responsible for its impact, the EU should not create regulatory incentives for platforms to remove legitimate content. Human-rights defenders often rely on platforms to expose all sorts of atrocities but trusting these companies with yet more unregulated power over content would lead to bigger problems—as evidenced by the silencing of Palestinian activists on ‘social media’ and of political activists in Europe. While we encourage transparent reporting and access to remedies for victims of harm, we also advocate a rule-of-law approach which speedily helps victims yet protects legitimate speech.
Empower users and promote the re-decentralisation of the web: strong competition laws should be put in place and the commission should enforce them. Tommaso Valletti, chief competition economist at the commission from 2016 until 2019, exposed the shocking inaction on competition in the Big Tech sector: ‘GAFAM (Google, Amazon, Facebook, Apple, Microsoft) have acquired more than 1,000 firms in the past 20 years, and zero of those transactions have been blocked—and 97 per cent were not even assessed by anybody.’ Indeed, many of these recent mergers have created huge privacy and data-protection concerns: Google’s acquisition of Fitbit and Facebook’s of WhatsApp are two of the best-known examples.
In addition, individuals should be empowered to leave privacy-invasive, gatekeeper platforms for equivalent services without losing access to connections, preferences and shared content. One of EDRi’s core demands on the DMA and DSA is mandatory inter-operability (so individuals can use alternatives to Big Tech platforms and operating systems) for gatekeepers or very large platforms. Contrary to the original commission proposal, we insist that mandated inter-operability should include core services, such as third-party content moderation and recommender systems.
Overwhelming public support
The DSA and the DMA could become two cornerstones of a radically changed environment for how citizens use technology and are affected by it. Combined with stronger enforcement of the General Data Protection Regulation (GDPR), a robust ePrivacy Regulation and the envisaged Artificial Intelligence Act, their impact on the regulation of digital technologies could be profound.
Change is at hand, the public are overwhelmingly in support and policy-makers are coming to see why it is necessary too. All those striving for a more equal, just and fair society need to make this happen now.