Not just the AI Act but the platform-work directive will be critical for human controls on automated management.

Last month was a watershed moment for the artificial-intelligence revolution which has many of us fascinated. No, it was not a new ChatGPT version from OpenAI or news of its arms race with Google’s Bard. Instead, the effort to regulate AI took a serious step forward.
Once again, the European Union is proving the frontrunner in regulating Big Tech. The European Parliament agreed its position on the AI Act in June with an almost unanimous vote.
While the last-minute political discussion focused on using biometric data, such as face recognition, to improve security in public spaces, more pressing issues have to be addressed, such as the social implications of AI in the workplace. As a piece in the New York Times put it, AI will revolutionise work, but nobody agrees on how—or how many jobs will be affected.
Guardrails needed
The EU, as a social superpower, has a lot to lose. Europe’s economy is built on strong industrial relations between workers and employers and a social system that protects workers. This also leaves Europe best-placed to get right the regulation of algorithms and AI in the workplace. And the vote on the AI Act in the parliament came with important news on regulating AI from the Council of the EU.
Member states also finally agreed a position last month on the platform-work directive. While this focuses only on a specific part of the labour force, riders and other service providers in the ‘gig’ economy are at the forefront of algorithmic management. And while about half of this directive deals with the presumption of a labour contract, to stop misuse of the ‘independent’ status of platform workers, the other half goes deep into algorithmic management and what guardrails are needed.
The council favours strict rules on algorithmic management, to empower workers and their representatives to get a grip on the systems being deployed: more transparency, human monitoring, a right to review and, most importantly, rules regarding (psychological) health-and-safety risks. Transparency means that workers should be informed of automated management systems—and, specifically, what behaviour is being taken into account and the factors influencing the algorithm’s decisions.
Human monitoring, coupled with the necessary resources, is essential to evaluate the impact of individual choices and entails sharing this evaluation with workers’ representatives, such as unions and works councils. It also requires, without undue delay, an explanation of a significant decision taken by the automatic management system—to clarify the contextual facts, circumstances and reasoning—and the right to ask for a human override, with compensation for associated damage.
The envisaged obligation to evaluate the risks of automated monitoring or decision-making systems to the safety and health of platform workers—especially work-related accidents and psychosocial risks—will represent a big win. The platforms should not only introduce appropriate preventive and protective measures: automated decision-making systems cannot, in any manner, put undue pressure on platform workers or otherwise put at risk their physical and mental health. The seriousness of this is evident from the research showing that road collisions are more likely for takeaway delivery riders working in the gig economy and the reports of accidents involving riders doing ‘quests’ for more cash.
A final measure empowering workers will be the obligation on platforms to inform and consult their representatives on decisions likely to lead to automated monitoring or decision-making systems or changes in their use. Because of the highly technical nature of such systems, trade unions would be able to request the assistance of an expert of their choice to formulate an opinion, with the cost falling on the platform.
Substantial precedent
As of now, this is just the position of the council and we shall have to await the ‘trilogue’ negotiations with the parliament and the European Commission to conclude the directive. But this legislation will set a substantial precedent, with concrete measures for dealing with AI in the workplace outside the platform economy.
The parliament’s proposals on the AI Act do touch on how to deal with high-risk AI deployed at work: it asks for transparency for workers and consultation of workers’ representatives. But further mitigation of algorithmic management’s health-and-safety risks could be sought, borrowing from the proposals to safeguard platform workers. These two files will go into trilogue in the coming months under the Spanish presidency, allowing concepts, wording and agreements to be translated between one file and another.
We can already see how automatic management systems can affect work when we look at platform work. We should use this experience to help steer the impact the AI revolution will have on work and workplaces in more traditional sectors.
One thing is sure: AI will affect many jobs in Europe. And there is a significant opportunity for the EU to set the rules for a human-centric approach, in line with European social values and workers’ rights.
Gerard Rinse Oosterwijk (gerard.oosterwijk@feps-europe.eu) is digital-policy analyst at the Foundation for European Progressive Studies. He has a background in law and economics and has been involved in setting up digital initiatives to promote a democratic space on the internet.