Explaining artificial intelligence in human-centred terms

Martin Schüßler 24th June 2020

This series is a partnership with the Weizenbaum Institute for the Networked Society and the Friedrich Ebert Stiftung

Since AI involves interactions between machines and humans—rather than just the former replacing the latter—‘explainable AI’ is a new challenge.


Intelligent systems, based on machine learning, are penetrating many aspects of our society. They span a large variety of applications—from the seemingly harmless automation of micro-tasks, such as suggesting synonymous phrases in text editors, to more contestable uses, such as jail-or-release decisions, anticipating child-services interventions and predictive policing, among many others.

Researchers have shown that for some tasks, such as lung-cancer screening, intelligent systems are capable of outperforming humans. In many other cases, however, they have not lived up to exaggerated expectations. Indeed, in some cases severe harm has resulted—well-known examples are the COMPAS system used in some US states to predict reoffending, held to be racially biased (although that study was methodologically criticised), and several fatalities involving Tesla’s Autopilot.

Black boxes

Ensuring that intelligent systems adhere to human values is often hindered by the fact that many are perceived as black boxes—they elude human understanding, which can be a significant barrier to their adoption and safe deployment. Over recent years there has been increasing public pressure for intelligent systems ‘to produce explanations regarding both the procedures followed by the algorithm and the specific decisions that are made’. It has even been debated whether explanations of automated systems might be legally required.

Explainable artificial intelligence (XAI) is an umbrella term covering research methods and techniques that try to achieve this goal. An explanation can be seen as a process as well as a product: it describes the cognitive process of identifying the causes of an event. At the same time, it is often a social process between an explainer (the sender of an explanation) and an explainee (its receiver), with the goal of transferring knowledge.

Much work on XAI is centred on what it is technically possible to explain, and explanations usually cater to AI experts. But this has been aptly characterised as ‘the inmates running the asylum’, because many stakeholders are left out of the loop. While it is important that researchers and data scientists are able to investigate their models, so that they can verify that these generalise and behave as intended—a goal far from being achieved—many other situations may require explanations of intelligent systems, addressed to many other audiences.
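
As an illustration of what such model investigation can look like in practice, the sketch below computes permutation feature importance, one widely used inspection technique: each input feature is shuffled in turn, and the resulting drop in accuracy indicates how much the model relies on it. It assumes scikit-learn; the dataset and model are illustrative stand-ins, not a reference to any system discussed in this article.

```python
# A minimal sketch of one common model-investigation technique:
# permutation feature importance with scikit-learn. The dataset and
# model are illustrative stand-ins, not those of any system named above.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much accuracy drops:
# a large drop means the model relies heavily on that feature.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)

# Print the five features the model depends on most.
ranked = sorted(zip(data.feature_names, result.importances_mean),
                key=lambda pair: -pair[1])
for name, importance in ranked[:5]:
    print(f"{name}: {importance:.3f}")
```

Such global importance scores help a data scientist check whether a model has latched onto sensible signals, but, as the article argues, they say little to a loan applicant or a doctor who must act on an individual prediction.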



Many intelligent systems will not replace human occupations entirely—the fear of full automation and the eradication of jobs is as old as the idea of AI itself. Instead, they will automate specific tasks previously undertaken (semi-)manually. Consequently, human interaction with intelligent systems will become much more commonplace. Human input and human understanding are prerequisites for the creation of intelligent systems and the unfolding of their full potential.

Human-centred questions

So we must take a step back and ask more values- and human-centred questions. What explanations do we need as a society? Who needs those explanations? In what context is interpretability a requirement? What are the legal grounds to demand an explanation?

We also need to consider the actors and stakeholders in XAI. A loan applicant requires a different explanation than a doctor in an intensive-care unit. A politician introducing a decision-support system for a public-policy problem should receive different explanations than a police officer planning a patrol with a predictive-policing tool. Yet what incentive does a model provider have to offer a convincing, trust-enhancing justification, rather than a merely accurate account?

As these open questions show, there are countless opportunities for non-technical disciplines to contribute to XAI. So far, however, there has been little such collaboration, despite its great potential. For example, participatory design is well equipped to create intelligent systems in a way that takes the needs of various stakeholders into account, without requiring them to be data-literate. And the methods of social science are well suited to developing a deeper understanding of the context, actors and stakeholders involved in providing and perceiving explanations.

Evaluating explanations

A specific instance where disciplines need to collaborate, to arrive at practically applicable scientific findings, is the evaluation of explanation techniques themselves. Many have never been evaluated at all, and most evaluations that have been conducted have been functional or technical. This is problematic, because most scholars agree that ‘there is no formal definition of a correct or best explanation’.

At the same time, conducting human-grounded evaluations is challenging, because no best practices yet exist. The few existing studies have often produced surprising results, which underlines their importance.

One study discovered that explanations led to a decrease in perceived system performance—perhaps because they disillusioned users, who came to understand that the system was not making its predictions in an ‘intelligent’ manner, even though these were accurate. In the same vein, a study conducted by the author indicated that salience maps—a popular and heavily marketed technique for explaining image classification—gave participants very limited help in anticipating the system’s classification decisions.
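
For readers unfamiliar with the technique, the sketch below shows how a basic gradient-based salience map can be computed: the gradient of the model's top class score with respect to each input pixel indicates which pixels most influence the prediction. It assumes PyTorch and torchvision; the model, preprocessing and image file are illustrative stand-ins, and the study mentioned above evaluated its own specific setup, not this code.

```python
# A minimal sketch of a gradient-based salience map, assuming PyTorch
# and torchvision. The pretrained model and the input image are
# illustrative; this is a generic example of the technique, not the
# exact method evaluated in the study cited above.
import torch
from torchvision import models, transforms
from PIL import Image

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

image = Image.open("example.jpg").convert("RGB")  # hypothetical input file
x = preprocess(image).unsqueeze(0)
x.requires_grad_(True)

scores = model(x)                  # class scores for this image
top_class = scores.argmax().item()
scores[0, top_class].backward()    # gradient of the top score w.r.t. pixels

# The salience map: large values mark pixels whose change would most
# affect the prediction. This heat map is the 'explanation' that such
# studies show to users.
salience = x.grad.abs().max(dim=1).values.squeeze()
```

The finding above suggests that even when such a heat map is faithful to the model, it may tell a lay user very little about what the system will actually decide.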

Many more studies will be needed to assess the practical effectiveness of explanation techniques. Yet such studies are very challenging to conduct, as they need to be informed by real-world uses and the needs of actual stakeholders. These human-centred dimensions remain underexplored. The need for such scientific insight is yet another reason why we should not leave XAI research to technical scholars alone.


Martin Schüßler

Martin Schüßler is a PhD candidate at TU Berlin, working at the interdisciplinary Weizenbaum Institute.
