Europe’s Digital Future Demands Public Ownership And Worker Control

European AI strategy must prioritise democratic governance over market solutions to protect citizens' data and workers' rights.

26th September 2025

Europe stands at a critical juncture in its digital transformation, particularly as artificial intelligence (AI) enters public services. While AI promises to enhance efficiency and service delivery, legitimate concerns about data sovereignty, cybersecurity and eroding public trust demand a strategic response. The European Union’s “Apply AI Strategy” must prioritise public interest over market-driven solutions and ensure democratic governance if it is to succeed.

The handling of sensitive data presents the most immediate challenge. Public authorities collect and manage highly confidential information about citizens’ mobility, health, judicial records and education. Without careful management, AI tools in these sectors risk inadvertently feeding sensitive data into Large Language Models (LLMs), creating significant cybersecurity vulnerabilities that could threaten national security itself.

The ongoing debate around the US Cloud Act and its implications for European privacy—evidenced by concerns raised in the French Senate—underscores an urgent need for robust digital sovereignty. Public concerns about tech giants’ access to data, exemplified by Palantir’s expanding role in German policing, highlight why strict oversight and accountability matter more than ever.

The Apply AI Strategy must therefore include concrete proposals for establishing publicly owned data infrastructure, including European public clouds with democratic governance and oversight. Relying solely on market-based solutions, as some current proposals suggest, falls dangerously short. As former European Central Bank (ECB) president Mario Draghi wisely observed, certain crucial areas cannot be left to market forces alone.

The EU possesses a strong tradition of public enterprise. Leveraging this expertise to promote public investment, management and ownership of digital infrastructure—including worker-owned cloud services—would significantly reduce dependence on private, often foreign-owned technologies. This approach also empowers workers, ensuring they control technologies that affect their livelihoods.

Against the current wave of deregulation, AI implementation must align with existing regulatory frameworks designed to protect citizens. While some advocate pausing implementation of the AI Act for high-risk models, this would prove counterproductive for building trust. Full and timely implementation of the AI Act remains paramount to ensuring that AI technologies are developed and deployed ethically, transparently and with due regard for fundamental rights. We need strict oversight, not relaxed regulatory measures.

The success of any AI strategy depends on engaging the public servants who will use these technologies daily. Civil servants, nurses, teachers and other public service professionals consistently report that they are not involved in testing, let alone designing, such systems. This omission significantly undermines the trust essential for effective technology adoption.

The Apply AI Strategy must therefore include legislative initiatives that regulate worker involvement, providing information, consultation and bargaining rights across all sectors, especially within public services. An EU-level agreement between trade unions and central government administrations demonstrates how directly involving workers and their unions in shaping digitalisation strategies benefits both citizens and employees. The agreement states that AI tools should serve the common good while improving working conditions and public services, and that autonomy for workers and administrations is crucial. Employers and unions stress the “human in command” and “human in control” principles, which require employers to negotiate collective agreements guiding the implementation of digital technologies and AI. The unions and employers have asked the European Commission to transform this agreement into legislation; that decision remains pending while the second-phase consultation on the right to disconnect and telework continues.

Beyond engagement, the strategy must address AI’s broader implications for the workforce, particularly given existing staffing shortages across the economy. While debate continues about AI replacing public service workers, we face an undeniable need for skilled IT staff, adequate training for existing personnel and adapted continuous professional development programmes.

The Health at a Glance: Europe report estimates a current shortage of 1.2 million health workers—a gap we expect will only grow. These technologies must also align with goals to drastically reduce carbon dioxide emissions while considering other environmental impacts. A recent report by ECOS and Open Future sets out concretely how expanding data centres may derail climate goals.

Workers must also ensure they control algorithmic management technologies—computer-programmed procedures for managing workers. These technologies increasingly pervade the European economy, with an Organisation for Economic Co-operation and Development (OECD) survey this year finding that 79 per cent of workplaces have adopted at least one algorithmic management tool. Our newly released guide on algorithmic management explains how workers and unions can control and limit these systems to prevent algorithms from facilitating exploitation. The Apply AI Strategy must comprehensively address these interconnections between technologies and their environmental and social implications.

For Europe’s Apply AI Strategy to succeed and foster trust, it must build on pillars of public ownership, democratic governance of digital infrastructure, sustainability and robust worker involvement. This means prioritising development of European public cloud services, ensuring full implementation of the AI Act and guaranteeing that social dialogue and collective agreements sit at the core of digital transformation.

Only by integrating these crucial elements can Europe ensure that AI technologies serve the public good, uphold fundamental rights and contribute to a more secure and sovereign digital future. The choice before us is clear: either we shape technology to serve democratic values and workers’ interests, or we allow market forces to determine our digital destiny. For Europe’s citizens and workers, only one option is acceptable.

This post is sponsored by EPSU
Author Profiles

Jan Willem Goudriaan has been general secretary of the European Federation of Public Service Unions (EPSU) since 2014.

Diego Naranjo is a Spanish lawyer and activist. From 2014 to 2024 he was the Head of Policy of EDRi, the European Digital Rights network. He now works as an independent expert.
