This series is a partnership with the Weizenbaum Institute for the Networked Society and the Friedrich Ebert Stiftung
Artificial intelligence should assist human work rather than rival it, and ‘good work by design’ should be the guiding principle of its introduction.
Discussions about digitisation and artificial intelligence (AI) mostly take place from the perspective of industrial production, as is evident from the ‘Industry 4.0’ debate which dominates in Germany. By contrast, little attention has been paid to tasks involving the handling of individual cases, even though such ‘white-collar work’ shapes large parts of the service sector and, indirectly, industrial companies. The ‘smartAIwork’ research project has however investigated the effects of AI on case handling and developed design solutions.
Case handling mostly involves administrative or office work. The spectrum ranges from simple data entry to complex tasks that require a high degree of creativity and knowledge, such as in information-technology development or the application of legal regulations. Simple office tasks with a high proportion of routine work—maintaining address files, for instance—are suitable for (partial) automation by means of software and algorithms. AI, on the other hand, is used to assist people in performing demanding case-handling tasks. The aim should be to ‘relieve’ the work of monotonous, burdensome aspects and to create more space for the ‘actual’ work.
Typical applications for AI in the office include:
- ‘intelligent’ chatbots, capable of learning, in customer service—in banks or local public transport, for instance;
- AI-supported assistants within human-resource management, or ‘AI recruiters’, and
- ‘intelligent’ robotic process automation for document management, such as for settling accounts for business trips or in procurement.
AI is currently not very widespread, however, and no more than a quarter of companies use corresponding technologies in their office work or plan to do so. Since, compared with industrial production, case handling is less easy to translate into standard processes, the opportunities for using AI in office work are limited.
‘Human factor’
This is especially so where the ‘human factor’ plays a major role—in the individuality of customer requests in banking or, more generally, where greater trust in decisions or an ability to contextualise is required. Furthermore, as the ‘smartAIwork’ project also shows, there are hurdles when it comes to the availability and quality of data for AI applications. This is a major challenge, especially for small and medium-sized enterprises.
Whether case-handling activities are replaced by AI should not however just depend on whether suitable uses can be found and whether substitution is technologically possible. There are sometimes good reasons for not automating certain activities. In addition to economic efficiency, these include legal restrictions, such as European Union constraints on legitimate data use.
Moreover, combining automated and non-automated activities in professional tasks can increase the complexity of those tasks, and with it the workload. In addition, AI is designed only for a narrowly limited area of application and shows its capabilities to best advantage only there. Its inability to respond adequately to unpredictable changes in the work process outside that defined field of application therefore places a technological limit on its use.
New interactions
AI is however expected to lead to new forms of interaction between humans and technology, which can simultaneously improve human work and increase the efficiency of work processes. The issue of AI use is thus not just one of rationalisation and automation but particularly of assisting human work, which can also lead to improved working conditions. For AI to be effective in this sense in the office, operational concepts must be designed on the basis of suitable general conditions.
The results of ‘smartAIwork’ show that the potential risks of using AI—particularly job losses and deskilling—can be avoided if certain factors are taken into account: legal and ethical standards, ergonomic findings about good work design and participative approaches to planning and implementing AI projects. The latter also help increase the extent to which AI is accepted by those employed in case handling. There is a greater chance of improving working conditions and results if AI is used as an assistant, not as a rival, to human work.
To establish the necessary general conditions and participatory processes, the support of politicians and social partners is required. They are asked to play their part to ensure that AI support in case handling leads to ‘good work’.
Ethical guidelines
In March, to mark the opening of the ‘AI Observatory’ of the Federal Ministry of Labour and Social Affairs, the German services union ver.di published ‘Ethical guidelines for the development and use of artificial intelligence (AI)’. These should serve as the basis for discussions with developers, programmers and decision-makers. Their target group also includes employees who are involved in the conception, planning, development, purchasing and use of AI systems in companies, and who therefore bear responsibility for them.
The union took a position on AI for the first time at the end of 2018, emphasising that the goals behind its development and deployment were central. AI should serve people—so the goals of, and premises for using, AI must be defined as precisely as possible. It is of the utmost importance that ‘good work by design’ is the approach from the start. To implement this, employee representation needs to be strengthened: participation needs to be ensured as early as possible during planning.
With a view to the impact AI will have on employment, we urgently need a targeted and strong commitment from politicians to establish employment relationships that have social-security protection, to strengthen the collective-bargaining system, to distribute employment fairly and to upgrade the social services required in society. A political debate is necessary concerning the areas in which AI assistance makes sense and is socially desirable. Assistance systems should also be preferred to autonomous systems, in terms of risk and workload management.
Additional training
Options for lifelong, in-service training must be established so that the labour force can keep pace with the rapid shifts of an AI-shaped world of work—for example, through state-sponsored part-time work combined with continuing professional development, and a right to such additional training enshrined in a nationwide law. Ethical, social and democratic aspects need to be integrated into this education and further training, which is otherwise mostly technical in nature.
More binding worker protection and the safeguarding of personal rights are also required. Stronger employee data protection is overdue, because the particular dependency of employees is especially evident in the AI context. For example, a ban on the collection and processing of biometric data from employees is urgently needed, as ‘pilot projects’ that use AI in call centres make clear. The ‘Ethical guidelines’ follow up on these positions and deepen them—particularly with a view to providing guidance and support for those who develop, introduce and use AI applications.
Markus Hoppe is a sociologist on the research staff of INPUT Consulting gGmbH in Stuttgart, focusing on the transformation of work through digitisation and AI, industrial relations and industrial sociology. Dr Nadine Müller is head of the department ‘Innovation and Good Work’ in the ver.di federal administration in Berlin.