Social Europe

politics, economy and employment & labour

Explaining artificial intelligence in human-centred terms

by Martin Schüßler on 24th June 2020


This series is a partnership with the Weizenbaum Institute for the Networked Society and the Friedrich Ebert Stiftung

Since AI involves interaction between machines and humans, rather than machines simply replacing people, ‘explainable AI’ poses a new challenge.


Intelligent systems, based on machine learning, are penetrating many aspects of our society. They span a large variety of applications—from the seemingly harmless automation of micro-tasks, such as the suggestion of synonymous phrases in text editors, to more contestable uses, such as in jail-or-release decisions, anticipating child-services interventions, predictive policing and many others.

Researchers have shown that for some tasks, such as lung-cancer screening, intelligent systems can outperform humans. In many other cases, however, they have not lived up to exaggerated expectations, and in some, severe harm has resulted. Well-known examples are the COMPAS system, used in some US states to predict reoffending and held to be racially biased (although that study was criticised on methodological grounds), and several fatalities involving Tesla’s autopilot.

Black boxes

Ensuring that intelligent systems adhere to human values is often hindered by the fact that many are perceived as black boxes. They thus elude human understanding, which can be a significant barrier to their adoption and safe deployment. Over recent years there has been increasing public pressure for intelligent systems ‘to produce explanations regarding both the procedures followed by the algorithm and the specific decisions that are made’. It has even been debated whether explanations of automated systems might be legally required.

Explainable artificial intelligence (XAI) is an umbrella term covering research methods and techniques that try to achieve this goal. An explanation can be seen as a process, as well as a product: it describes the cognitive process of identifying the causes of an event. At the same time, it is often a social process between an explainer (the sender of an explanation) and an explainee (the receiver), with the goal of transferring knowledge.

Much work on XAI is centred on what is technically possible to explain, and explanations usually cater for AI experts. This has been aptly characterised as ‘the inmates running the asylum’, because many stakeholders are left out of the loop. It is important that researchers and data scientists are able to investigate their models, so that they can verify that these generalise and behave as intended—a goal far from being achieved. But many other situations, and many other audiences, also require explanations of intelligent systems.

Many intelligent systems will not replace human occupations entirely—the fear of full automation and the eradication of jobs is as old as the idea of AI itself. Instead, they will automate specific tasks previously undertaken (semi-)manually. Consequently, human interaction with intelligent systems will become much more commonplace. Human input and human understanding are prerequisites for creating intelligent systems and for realising their full potential.

Human-centred questions

So we must take a step back and ask more values- and human-centred questions. What explanations do we need as a society? Who needs those explanations? In what context is interpretability a requirement? What are the legal grounds to demand an explanation?

We also need to consider the actors and stakeholders in XAI. A loan applicant requires a different explanation than a doctor in an intensive-care unit. A politician introducing a decision-support system for a public-policy problem should receive different explanations than a police officer planning a patrol with a predictive-policing tool. Yet what incentive does a model provider have to provide a convincing, trust-enhancing justification, rather than a merely accurate account?


As these open questions show, there are countless opportunities for non-technical disciplines to contribute to XAI. Yet such collaboration remains rare, despite its potential. For example, participatory design is well equipped to create intelligent systems in a way that takes the needs of various stakeholders into account, without requiring them to be data-literate. And the methods of social science are well suited to developing a deeper understanding of the context, actors and stakeholders involved in providing and perceiving explanations.

Evaluating explanations

A specific instance where disciplines need to collaborate to arrive at practically applicable scientific findings is the evaluation of explanation techniques themselves. Many techniques have never been evaluated, and most of the evaluations that have been conducted were functional or technical. This is problematic, because most scholars agree that ‘there is no formal definition of a correct or best explanation’.

At the same time, the conduct of human-grounded evaluations is challenging because no best practices yet exist. The few existing studies have often found surprising results, which emphasises their importance.

One study discovered that explanations led to a decrease in perceived system performance—perhaps because they disillusioned users, who came to understand that the system was not making its predictions in an ‘intelligent’ manner, even though these were accurate. In the same vein, a study conducted by the author indicated that salience maps—a popular and heavily marketed technique for explaining image classification—gave participants very limited help in anticipating the system’s classification decisions.
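For readers unfamiliar with the technique, a salience map scores each region of an input image by how much it influenced the classifier’s decision. A minimal occlusion-based sketch follows—the `toy_score` classifier is an invented stand-in for illustration, not any real model or the one used in the study:

```python
import numpy as np

def occlusion_saliency(image, score_fn, patch=4):
    """Occlusion-based salience map: grey out each patch in turn and
    record how much the classifier's score drops. A larger drop means
    the region mattered more to the decision."""
    h, w = image.shape
    base = score_fn(image)
    saliency = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = image.mean()  # grey out one patch
            saliency[i // patch, j // patch] = base - score_fn(occluded)
    return saliency

# Hypothetical toy "classifier": scores an 8x8 image purely by the
# brightness of its top-left quadrant.
def toy_score(img):
    return float(img[:4, :4].mean())

img = np.zeros((8, 8))
img[:4, :4] = 1.0  # bright top-left quadrant drives the score
sal = occlusion_saliency(img, toy_score, patch=4)
# The top-left cell of `sal` dominates, correctly flagging the
# region the classifier relied on.
```

Real salience methods for deep networks (gradient-based or perturbation-based) are more elaborate, but the principle is the same: the map points at *where* the model looked, which, as the study cited above suggests, does not necessarily tell users *why* it decided as it did.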

Many more studies will be necessary to assess the practical effectiveness of explanation techniques. Yet such studies are very challenging to conduct, as they need to be informed by real-world uses and the needs of actual stakeholders. These human-centred dimensions remain underexplored. The need for such scientific insight is yet another reason why we should not leave XAI research to technical scholars alone.

About Martin Schüßler

Martin Schüßler is a PhD candidate at TU Berlin, working at the interdisciplinary Weizenbaum Institute.

Social Europe ISSN 2628-7641
