The Digital Omnibus: Deregulation Dressed as Innovation

The EU's sweeping data and AI package loosens safeguards for workers while promising competitiveness gains that will flow mainly to US tech giants.

3rd December 2025

The Digital Omnibus Package recasts EU data and artificial-intelligence law as a tool for AI competitiveness. Presented as a neutral technical simplification to unlock the power of data, it is in practice a deregulatory exercise that will sideline social and labour objectives and weaken protections for workers.

The package restructures the EU data acquis by consolidating instruments into a single Data Act and loosening safeguards in the General Data Protection Regulation (GDPR), the AI Act and related laws, while launching an EU Data Union Strategy. Its evidential base is thin, with no substantiated impact assessment or risk analysis. As Mariniello, Alemanno and NOYB rightly argue, the Digital Omnibus represents a constitutional shift. Yet this shift has been introduced without systematic assessment of its consequences for workers and other vulnerable users.

Workers bear the burden

The Digital Omnibus is a horizontal regulation, and many of the amendments it introduces to existing legislation will directly affect workplaces that use AI-assisted monitoring and algorithmic management. In Recital 36, it labels employment a “data-intensive” field excluded from the simplified information regime it creates. In principle, this preserves full transparency guarantees under Article 13 GDPR and prevents employers from invoking the new derogations. The Omnibus, however, adds no specific labour safeguards, and the cumulative effect of the changes still favours employers.

A new derogation to Article 9 GDPR permits residual special-category data in AI datasets, provided controllers avoid collecting such data, delete them where possible and otherwise prevent them from influencing outputs or being disclosed. Together with the new Article 4a AI Act, which exceptionally authorises processing of special-category data for bias detection and correction, this normalises the re-use of worker-related sensitive data for AI research, development, training and operation, within safeguards largely internal to controllers. At the same time, the Omnibus makes the concept of personal data relative to each entity, stating that information is not personal data for a given entity where that entity lacks means reasonably likely to identify the person concerned. Analytics providers handling pseudonymised worker datasets without a re-identification key may therefore treat the data as non-personal and their processing as falling outside the GDPR, even though the same datasets remain personal data for employers.

The new Article 88c GDPR allows processing of personal data to develop and operate an AI system or model on the basis of legitimate interests under Article 6(1)(f), unless Union or national law requires consent, subject to safeguards such as data minimisation, enhanced transparency and an unconditional right to object. This consolidates employers’ reliance on legitimate interests for wide use of HR records, log files and communication data to build and run AI systems for scheduling, performance assessment and monitoring.

Workers’ ability to access information and resist automated decisions is also affected. In GDPR Article 12, the notion of an “excessive request” is broadened so that controllers may refuse or charge a fee for requests they regard as abusive, repetitive or overly broad and undifferentiated. In the workplace, this widens employers’ room to narrow or decline individual and collective access requests, including those supported by trade unions, reducing the effectiveness of Article 15 GDPR as the main transparency tool for workers. The Omnibus also clarifies that decisions based solely on automated processing are permitted where they are considered necessary for entering into, or performance of, a contract, even if a human decision would also be possible. Employers can thus frame automated monitoring, rating or disciplinary systems as contractually necessary default tools without any strengthening of transparency or contestability safeguards for affected workers.

Further changes weaken risk management and AI literacy obligations. Controllers must now notify only breaches likely to result in a high risk to the rights and freedoms of natural persons, aligning the threshold for notification to supervisory authorities with that for communication to data subjects. Employers therefore face fewer notification obligations, and workers may be more exposed to HR data leaks that are recorded internally but never notified externally. In parallel, the obligation on providers and deployers to ensure a sufficient level of AI literacy among their staff is replaced by a duty on the Commission and member states merely to encourage them to do so, removing a concrete legal hook for workers and unions to demand AI-literacy measures.

The high-risk classification system is also diluted. Article 6(4) AI Act is amended so that a provider who considers that an AI system listed in Annex III is not high-risk must document that assessment before placing the system on the market or putting it into service, and must provide the documentation to competent authorities on request. Exempted Annex III systems, moreover, no longer need to be registered in the EU high-risk database. Providers can therefore opt out of high-risk status with reduced ex ante visibility for workers and their representatives.

One apparent safeguard deserves mention: controllers whose activities are not data-intensive and whose processing is limited to small amounts of personal data for simple, non-complex purposes (such as craftspeople, associations or local sports clubs) may benefit from a simplified information regime and, under certain conditions, dispense with individual privacy notices. Employment contexts are explicitly defined as data-intensive, so employers relying on data-driven or algorithmic management should not be able to invoke this derogation.

A false promise of innovation

The Omnibus is presented as an innovation and competitiveness measure that will boost AI through simpler rules: lighter regulation is expected to foster innovation and diffuse its benefits to society. Competitiveness is framed as a collective good. Yet in a market where EU firms attract far less AI investment than their US competitors, the Digital Omnibus is unlikely to benefit European industry, workers or civil society, and large US-based providers are likely to be the main winners.

Ultimately, the Digital Omnibus is not a simple technical clean-up but a deregulatory exercise that relaxes regulatory constraints for incumbent providers and shifts the costs of its competitiveness agenda onto workers and society.

Author Profile

Aida Ponce Del Castillo is a senior researcher at the European Trade Union Institute.
