Stronger legislation than the European Commission envisages is needed to regulate AI and protect workers.
Artificial intelligence (AI) is of strategic importance for the European Union: the European Commission frequently affirms that ‘artificial intelligence with a purpose can make Europe a world leader’. Recently, the commissioner for the digital age, Margrethe Vestager, again insisted on AI’s ‘huge potential’ but admitted there was ‘a certain reluctance’, a hesitation on the part of the public: ‘Can we trust the authorities that put it in place?’ One had to be able to trust in technology, she said, ‘because this is the only way to open markets for AI to be used’.
Trust is indeed central to the acceptance of AI by European citizens. The recent toeslagenaffaire (allowances affair) in the Netherlands is a reminder of the dangers. Tens of thousands of families were flagged up as potentially fraudulent claimants of childcare allowances, without any proof, and forced to repay them—driving many into poverty and some to depression and suicide. All of this was the consequence of a self-learning algorithm and AI system, designed without checks and balances and not subject to human scrutiny.
In its current form, the AI regulation proposed by the commission last April will not protect citizens from similar dangers. It will also not protect workers. In its eagerness to push AI forward and position itself in the global AI race, the commission has overlooked workers’ rights. The envisaged AI legislation is framed in terms of product safety and, as such, employment is not within its legal ambit.
The only reference to employment is found in annex III, which lists ‘high-risk’ AI systems. These take in recruitment and selection, the screening and evaluation of candidates, decisions on the promotion or termination of work-related contractual relationships and on task allocation, as well as the monitoring and evaluation of the performance and behaviour of persons in such relationships.
The regulation would not however provide any additional specific protection to workers, nor ensure their existing rights were safeguarded, despite the uncertainty AI will generate in these regards. And although the General Data Protection Regulation (GDPR) has been in force for almost four years, its protective potential for workers is not yet being used to the full.
Shortcomings to address
Along with other emerging technologies, such as quantum computing, robotics or blockchain, AI will disrupt life as we know it. The EU can become an AI global leader only if it remains faithful to its democratic and social values, which implies protecting the rights of its workers.
To do that, the shortcomings of the AI regulation and the GDPR need to be addressed. Seven aspects deserve more attention:
Implementing GDPR in the context of employment: fully implementing GDPR rights for workers is one of the most effective ways to ensure they have control over their data. AI relies on data, including workers’ personal data. Workers need to use GDPR actively, to ask how such data is used (potentially for profiling or to discriminate against them), stored or shared, in and out of the employment relationship; employers need to respect their right to do so. The commission and the European Data Protection Supervisor should issue clear recommendations insisting on the applicability of the GDPR at work. There may also be a need to determine what role labour inspectors could or should play.
Further developing the ‘right to explanation’: when decisions supported by algorithms—the processing of sensitive data, performance assessment, task allocation based on reputational data, profiling and so on—negatively affect workers or are associated with a bias (in the design or the data), the right to explanation becomes an essential defence mechanism. A specific framework, based on articles 12-15 and 22 and recital 71 of the GDPR, must be developed and must apply to all forms of employment. In practice, when a decision supported by an algorithm has been made and negatively affects a worker, such a framework should enable the individual to obtain information that is simultaneously understandable, meaningful and actionable; receive an explanation of the logic behind the decision; understand the significance and the consequences of the decision; and challenge the decision, vis-à-vis the employer or in court if necessary.
Purpose of AI algorithms: in an occupational setting, having access to the code behind an algorithm is not useful per se. What matters to workers is understanding the purpose of the AI system or the algorithm embedded in an application. This is partly covered by GDPR article 35, on the obligation to produce data-protection impact assessments. Further action is however needed to make sure workers’ representatives are involved.
Involving workers’ representatives in AI risk assessments at work, pre-deployment: given the potential risk of misuse, as well as of unintended or unanticipated harmful outcomes stemming from AI systems, employers should have an obligation under the proposed regulation to conduct technology risk assessments before deployment. Workers’ representatives should be systematically involved and have a role in characterising the level of risk arising from the use of AI systems and in identifying proportionate mitigation measures, all along the life cycle. Risk assessments should address general issues of cybersecurity, privacy and safety, as well as specific associated threats.
Addressing intrusive surveillance: workplace monitoring is increasingly being replaced by intrusive surveillance, using data related to workers’ behaviour, biometrics and emotions. Given the risk of abuse, legal provisions are needed to ban such practices.
Boosting workers’ autonomy in human-machine interactions: this entails ensuring that workers are ‘in the loop’ of fully or semi-automated decision-making and that they make the final decision, using the input from the machine. This is particularly important when joint (human-machine) problem-solving takes place. Boosting workers’ autonomy means sustaining the accumulated tacit knowledge of the workforce and supporting the transfer of that knowledge to the machine—whether it be a co-operative robot or a piece of software. This is particularly pertinent to processes that require testing, quality control or diagnosis.
Enabling workers to become ‘AI literate’: acquiring technical skills and using them ‘at work’, although necessary, is not enough and mostly serves the interests of one’s employer. Becoming ‘AI literate’ means being able to understand critically the role of AI and its impact on one’s work and occupation, and being able to anticipate how it will transform one’s career and role. Passively using AI systems does not benefit workers themselves—a certain distance needs to be established for them to see AI’s overall influence. There is scope here for a new role for workers’ representatives to flag up digitally-related risks and interactions, to assess the uncertain impact of largely invisible technologies and to find new ways of effectively integrating tacit knowledge into workflows and processes.
Two scenarios
In the negotiations over the AI regulation, two possible scenarios have emerged. The first revolves around adding ‘protective’ amendments to the text. This may not be enough: bringing employment within the regulation’s legal purview would require substantial changes to its scope.
The second scenario involves adopting complementary rules on AI for the workplace. These would add to the GDPR and the commission’s draft directive on improving working conditions in platform work, in particular when it comes to algorithmic management.
As the Dutch toeslagenaffaire has shown, algorithms can have a direct and damaging impact on people and on workers’ lives. For trust ever to exist, the AI act must be reorientated: its current focus is on enabling business and promoting the EU as a global AI leader, when the priority should be to protect citizens and workers.
Aida Ponce Del Castillo is a senior researcher at the European Trade Union Institute.