The loopholes in the AI Act emerging from trilogue negotiations late on Friday could allow big corporations to slip through.
After intense negotiations, the European Union institutions have reached a provisional agreement on the Artificial Intelligence Act. With the last such trilogue dealing with 21 critical outstanding issues in just 36 hours and details of what was traded remaining unclear, full analysis will have to wait until the final act is published.
The following six pillars of the act are, however, known.
Risk-based approach: the original risk-based approach of the European Commission is maintained. ‘High-risk’ AI systems—that can harm health, safety, fundamental rights, environment, democracy and the rule of law—are subject to specific requirements. A series of filtering conditions have though been added to ensure that ‘only’ genuine high-risk applications are subject to the legal requirements.
Prohibited uses: AI systems that cause unacceptable risk are banned. This covers those that manipulate human behaviour or exploit individuals’ vulnerabilities, as well as the untargeted scraping of facial images, emotion recognition in the workplace and schools (with a caveat for safety reasons, to recognise for instance if a driver falls asleep), social scoring, biometric categorisation and some cases of predictive policing.
Foundation models: as large systems able to perform a wide range of distinct tasks, foundation models have to comply with specific transparency obligations before they are placed on the market. Stricter rules have been introduced for ‘high-impact’ foundation models with systemic risk, whose capabilities and performance are well above average and which can disseminate systemic risks along the value chain.
The rules include the need to produce model evaluations, assess and mitigate systemic risks, conduct adversarial testing, ensure cybersecurity and report to the commission on serious incidents and on energy efficiency. Until harmonised EU standards are published, however, developers of general-purpose AI carrying systemic risk may rely on codes of practice to comply with the regulation.
Fundamental rights: ‘AI systems are developed and deployed for harmful and discriminatory forms of state surveillance. AI in law enforcement disproportionately targets already marginalised communities, undermines legal and procedural rights, and enables mass surveillance,’ 16 civil-society organisations recently stated. In response to this pressure, an obligation to conduct a fundamental-rights impact assessment (FRIA) was included in the act. This concerns public bodies and private entities which provide services of general interest (hospitals, schools, banks, insurance companies) and deploy high-risk systems.
The act provides though for several law-enforcement exemptions, which would allow retrospective and real-time biometric identification to prevent terrorist attacks or locate the victims or suspects of a pre-defined list of serious crimes. An emergency procedure would also allow law-enforcement agencies to deploy in urgent circumstances a high-risk AI tool that had not passed the conformity-assessment procedure.
Enforcement mechanism: violations will be punished with fines ranging from €35 million or 7 per cent of global annual turnover, whichever is higher, for the most serious breaches, down to €7.5 million or 1.5 per cent of turnover for lesser infringements, such as the supply of incorrect information. More ‘proportionate’ caps will however apply to administrative fines for small and medium enterprises and start-ups.
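By way of illustration only, the tiered caps can be read as a ‘whichever is higher’ calculation on a company’s global turnover. The short Python sketch below encodes that reading; the tier names, the pairing of figures and the helper function are illustrative assumptions, not text from the act.

```python
# Illustrative sketch only: how the reported fine caps might be computed,
# assuming the applicable cap is the fixed amount or the turnover
# percentage, whichever is higher. Tier names are hypothetical.
FINE_TIERS = {
    "prohibited_practices": (35_000_000, 0.07),    # €35m or 7% of turnover
    "incorrect_information": (7_500_000, 0.015),   # €7.5m or 1.5% of turnover
}

def fine_cap(tier: str, global_annual_turnover_eur: float) -> float:
    """Return the maximum administrative fine for a given tier and turnover."""
    fixed_amount, turnover_share = FINE_TIERS[tier]
    return max(fixed_amount, turnover_share * global_annual_turnover_eur)

# A firm with €2bn global turnover breaching a prohibition:
# 7 per cent of €2bn = €140m, which exceeds €35m, so the cap is €140m.
print(f"€{fine_cap('prohibited_practices', 2_000_000_000):,.0f}")
```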
Natural or legal persons may complain to the market-surveillance authority and expect their complaint to be handled in line with the dedicated procedures of that authority. Citizens will also have the right to complain about AI systems and receive explanations about decisions based on high-risk systems which affect their rights.
Governance structure: the act establishes an AI Office and an AI Board. The office will sit within the commission, overseeing the most advanced AI models, fostering standards and enforcing the rules. A scientific panel of independent experts will advise on foundation models, including high-impact ones.
The board, composed of member states’ representatives, will act as a co-ordination platform and an advisory body to the commission. An advisory forum of stakeholders (industry, SMEs, start-ups, civil society and academia) will furnish technical expertise.
Loopholes in the law
At a news conference late on Friday night, Thierry Breton, the commissioner for the internal market, Carme Artigas Brugal, the secretary of state for digitalisation and AI representing the Spanish presidency, and Brando Benifei and Dragos Tudorache, the European Parliament rapporteurs, expressed their conviction that the act balanced the promotion of innovation and the protection of society. Yet several loopholes risk undermining the law’s protective role.
The filter conditions added may allow some high-risk applications to slip beyond the act’s scope. The mandatory FRIAs and the ban on biometric-identification systems are undermined by major exceptions, and high-risk AI tools could be deployed under a claim of urgency. Emotion-recognition AI systems in the workplace are to be banned yet allowed for safety reasons—where does safety begin and end, and will workers have a real say?
The hefty fines for violations could be bypassed by exploiting the ‘proportionate caps’ qualification: larger companies would be incentivised to entrust start-ups with their riskiest AI projects. Complaint mechanisms remain unclear, beyond references to ‘dedicated procedures’ and the right for citizens to ‘receive explanations’. And the governance structure welcomes societal stakeholders only as providers of technical expertise to a body representing member states, which could dampen the voice of civil society to the point of inaudibility.
Real objective
Breton affirmed: ‘The AI Act is much more than a rule book—it’s a launch pad for EU start-ups and researchers to lead the global AI race.’ This sums up the real objective pursued by the commission and the Council of the EU from the beginning of the legislative process: to develop a legal framework that helps the EU position itself as a global leader in AI, restricts the sector’s development as little as possible and provides maximum support to AI companies operating in the EU.
In short, it is a deregulatory regulation.
Aida Ponce Del Castillo is a senior researcher at the European Trade Union Institute.