This series is a partnership with the Weizenbaum Institute for the Networked Society and the Friedrich Ebert Stiftung
Since AI involves interactions between machines and humans—rather than just the former replacing the latter—‘explainable AI’ is a new challenge.
Intelligent systems, based on machine learning, are penetrating many aspects of our society. They span a large variety of applications—from the seemingly harmless automation of micro-tasks, such as the suggestion of synonymous phrases in text editors, to more contestable uses, such as jail-or-release decisions, the anticipation of child-services interventions, predictive policing and many others.
Researchers have shown that for some tasks, such as lung-cancer screening, intelligent systems are capable of outperforming humans. In many other cases, however, they have not lived up to exaggerated expectations. Indeed, in some cases severe harm has resulted: well-known examples are the COMPAS system used in some US states to predict reoffending, found to be racially biased (although that study was methodologically criticised), and several fatalities involving Tesla’s Autopilot.
Black boxes
Ensuring that intelligent systems adhere to human values is often hindered by the fact that many are perceived as black boxes—they thus elude human understanding, which can be a significant barrier to their adoption and safe deployment. Over recent years there has been increasing public pressure for intelligent systems ‘to produce explanations regarding both the procedures followed by the algorithm and the specific decisions that are made’. It has even been debated whether explanations of automated systems might be legally required.
Explainable artificial intelligence (XAI) is an umbrella term which covers research methods and techniques that try to achieve this goal. An explanation can be seen as a process, as well as a product: it describes the cognitive process of identifying causes of an event. At the same time, it is often a social process between an explainer (sender of an explanation) and an explainee (receiver of an explanation), with the goal of transferring knowledge.
Much work on XAI is centred on what is technically possible to explain, and explanations usually cater for AI experts. But this has been aptly characterised as ‘the inmates running the asylum’, because many stakeholders are left out of the loop. While it is important that researchers and data scientists are able to investigate their models, so that they can verify that they generalise and behave as intended (a goal far from being achieved), many other situations require explanations of intelligent systems—and many other audiences need them.
Many intelligent systems will not replace human occupations entirely—the fear of full automation and eradication of jobs is as old as the idea of AI itself. Instead, they will automate specific tasks previously undertaken (semi-)manually. Consequently, the interaction of humans with intelligent systems will be much more commonplace. Human input and human understanding are prerequisites for the creation of intelligent systems and the unfolding of their full potential.
Human-centred questions
So we must take a step back and ask more values- and human-centred questions. What explanations do we need as a society? Who needs those explanations? In what context is interpretability a requirement? What are the legal grounds to demand an explanation?
We also need to consider the actors and stakeholders in XAI. A loan applicant requires a different explanation from a doctor in an intensive-care unit. A politician introducing a decision-support system for a public-policy problem should receive different explanations from a police officer planning a patrol with a predictive-policing tool. Yet what incentive does a model provider have to provide a convincing, trust-enhancing justification, rather than a merely accurate account?
As these open questions show, there are countless opportunities for non-technical disciplines to contribute to XAI. There is however little such collaboration, though much potential. For example, participatory design is well equipped to create intelligent systems in a way that takes the needs of various stakeholders into account, without requiring them to be data-literate. And the methods of social science are well suited to develop a deeper understanding of the context, actors and stakeholders involved in providing and perceiving explanations.
Evaluating explanations
A specific instance where disciplines need to collaborate to arrive at practically applicable scientific findings is the evaluation of explanation techniques themselves. Many have not been evaluated at all, and most of the evaluations which have been conducted have been functional or technical. This is problematic because most scholars agree that ‘there is no formal definition of a correct or best explanation’.
At the same time, the conduct of human-grounded evaluations is challenging because no best practices yet exist. The few existing studies have often found surprising results, which emphasises their importance.
One study discovered that explanations led to a decrease in perceived system performance—perhaps because they disillusioned users who came to understand that the system was not making its predictions in an ‘intelligent’ manner, even though these were accurate. In the same vein, a study conducted by the author indicated that salience maps—a popular and heavily marketed technique for explaining image classification—gave participants very limited help in anticipating the system’s classification decisions.
Many more studies will be necessary to assess the practical effectiveness of explanation techniques. Yet it is very challenging to conduct such studies, as they need to be informed by real-world uses and the needs of actual stakeholders. These human-centred dimensions remain underexplored. The need for such scientific insight is yet another reason why we should not leave XAI research to technical scholars alone.
Martin Schüßler is a PhD candidate at TU Berlin, working at the interdisciplinary Weizenbaum Institute.