The coronavirus crisis demands a regulatory framework for the application of AI to protect public health without jeopardising human rights.
Our world has been shaken by the Covid-19 pandemic, pushing policy-makers to scramble for solutions. And even though the full set of such solutions remains elusive, a return to normal is already being debated.
But what will this ‘normal’ be? Powerful forces presume that the world before Covid-19 is the normal to which to return and it falls on progressives to push for new fundamentals—to help formulate a ‘new’ normal. Clearly this is multifaceted and one facet is the role of technology.
Artificial intelligence, as a revolutionary force in restructuring production and consumption patterns, has long been on the agenda of policy-makers. The role of AI, as a creative but disruptive process in the job market, in healthcare, in education—even in shaping our democracies—is undeniable.
Given the health focus of the continuing crisis, overcoming the regulatory, ethical and medical challenges posed by the use of AI in healthcare must be a priority. Defining the framework to do so will be a pivotal initial step in guaranteeing that the new normal produces a fair outcome—that fundamental rights are safeguarded while simultaneously improving healthcare for all.
If supported by adequate and effective regulation, AI promises a wide array of opportunities to improve public health as well as the quality and efficiency of the healthcare sector. Without such a framework, AI has the potential to be just another instrument in a system where rights are sidelined for profit maximisation and biases are reproduced systemically.
The Parliamentary Assembly of the Council of Europe (PACE) is preparing a number of reports on the implications of AI. As rapporteur on AI in healthcare, I must point to existing Council of Europe legal instruments—such as the Convention for the Protection of Human Rights and Dignity of the Human Being with regard to the Application of Biology and Medicine (the Oviedo convention) and the Convention for the Protection of Individuals with regard to the Automatic Processing of Personal Data—as guides for national regulatory efforts.
Tracking and tracing
Clearly AI has played a critical role in the initial detection of the pandemic. It has been used in tracking the spread of disease and hospital capacity, in identifying high-risk patients and in developing drugs and, potentially, a vaccine. Maybe the most visible public debate regarding AI in healthcare has been over ‘testing and tracing’ apps, which have been claimed as important tools to control the spread of the virus and provide valuable information to design strategies for exit from lockdown.
AI’s highly promising potential for the future of public health in Europe is however not the only reality which the pandemic has laid bare. It has offered a stark reminder of socio-economic inequalities—of the need to restrain over-marketisation and regulate markets, and to govern potential conflicts between ethical principles and market forces.
The lasting legacy of neoliberalism is manifested most notably in privatised healthcare and highly precarious job markets. This has aggravated the consequences of the pandemic, particularly for working people, for the unemployed and for the precariat. The unequal social and economic structures established and reinforced under neoliberal hegemony impede our capacity to address the challenges the pandemic has thrown up.
Equally, had there been a trusted and well-defined regulatory framework, maybe AI could have had a much larger positive impact on the coronavirus crisis. The public’s concern regarding the misuse and abuse of data by states, as well as the private sector, would have been mitigated.
We need to set a new framework capable of creating social benefits from AI while safeguarding fundamental rights and democratic governance and ensuring equality. These questions fit snugly into the debate as to what the ‘new’ normal will be: will the means of surveillance for the sake of health purposes accelerate a totalitarian drift or will they be governed by an empowered citizenry? And will isolationist reflexes deepen or will multilateralism, co-operation and solidarity rise to the challenge?
These questions are relevant to any discussion of AI and healthcare—the former to a regulatory framework that will ensure protection of human rights, the latter to whether AI in healthcare will be driven by co-operation and solidarity or, in their absence, profit-seeking objectives.
Evidently, health and personal privacy can never be alternatives—they must go hand in hand. Public trust in the state and the private sector can only prevail if all their agents guarantee basic human rights in developing and using AI.
Given the urgency of doing so in the struggle against the coronavirus, it is of utmost importance to agree on at least a workable basic framework that will enhance trust and make AI operational for the better. And the Covid-19 outbreak has shed light on its critical aspects.
Such a framework should ensure that AI in healthcare empowers citizens in making better-informed decisions and provides information to hold governments accountable for the decisions they make. So that AI does not become instrumental in aggravating inequalities, it should also ensure that data and algorithms are unbiased, and that processes are transparent and inclusive.
It should be based on well-defined liability and a well-balanced public-private dialogue. It should put in place the conditions and guarantees to ensure that pursuing the collective interest does not override individual rights. It should require that technology used for monitoring and tracking is deployed only temporarily and does not become a permanent feature.
When the new regulatory framework is designed, the point of departure should be recognition of access to healthcare and protection of personal data and privacy as fundamental, indispensable rights. Technology-driven opportunities such as AI should be incorporated into healthcare systems in ways that guarantee equal access while safeguarding those rights. Only then will we not only overcome this pandemic but ensure we are better prepared to tackle the next one.