This series is a partnership with the Weizenbaum Institute for the Networked Society and the Friedrich Ebert Stiftung
As AI enters the workplace, we need to reflect upon the criteria by which human work is evaluated and human subjectivity depicted.
Control rooms are the obvious place from which to run operations centrally. The control rooms of Star Trek’s fantastical Enterprise (and the hub of the actual Project Cybersyn under Chile’s radical president Salvador Allende) in the 1960s and 70s were, however, operated by humans with relatively primitive technologies.
Today, much of the work of the people we imagined in these rooms—the bouffanted women in silver A-line dresses and the men in blue boiler suits pushing buttons to steer the manoeuvres of galactic imperialism—is done by computers. But what will happen when the proverbial windows looking out to the galaxies display only a cadre of robots and the control panels’ blinking lights are the only reflective glimmer?
The so-called Industries 2.0 to 4.0 have seen an onslaught of machines and machinic competences in the workplace control rooms of today, via robotic process automation, semi-automation, machine learning and algorithmic management systems. Digitalised workplace design and surveillance techniques are oriented around the rise in new technologies, where the processing and quantification of workers’ data is seen as necessary for a company’s competitiveness.
People analytics
The technology that has allowed workplace processes to reach a new pinnacle of computational sophistication is artificial intelligence, in the shape of a fast-growing array of tools and applications. AI allows semi-automation of decision-making processes via machine learning, which is particularly applicable in the case of human-resources-driven ‘people analytics’ (PA), where predictions and prescriptions about job candidates and workers—or ‘data subjects’, as the General Data Protection Regulation (GDPR) puts it—can now be made by applying quantification techniques to data sets.
Put simply, with the use of PA, we are asking machines to relay truths, or subjective images about other people, via computation. While we once expected the machine to mirror the human, we now seem to be looking into a machinic mirror for our own reflection and those of others. The full implications of this ‘mirror stage’ of capitalism—to borrow a phrase from the psychoanalyst Jacques Lacan—are yet to be played out but are exceedingly important.
For Lacan, the mirror stage was the moment in which the child realises her separation from the rest of her environment. The mirror stage for what I am calling ‘smart workers’ within capitalism today must be a moment of defying the assumption that we are inexorably subsumed into a machinic subject, retaining the firm scaffolding of what makes us human and resisting a purportedly automatic domination. Given growing expectations that AI will become universal, and as decision-making about workers comes to rest ever more on quantification and automation, it is vital that we exercise reflexivity and retain our human autonomy if the most negative implications for workplaces and workers, with regard to automation and surveillance, are to be avoided.
Machine learning
People analytics is perhaps the best-known form of AI-augmented workplace tool. Generally speaking, PA is a set of human-resources (HR) activities which rely on managers identifying patterns, and comparing them, across data sets collected about workers.
The AI component in PA lies in how algorithms are set up to make the decisions, via machine-learning procedures. Big data, algorithms and machine learning are central in digitalised recruitment, where decisions about talent spotting, interviewing, leadership prediction, individual worker performance, health patterns across workers and other operational management issues can be digitally assisted.
Indeed, machines become the mirror for workers’ subjectivities via quantification. Predictions are made about applicants regarding aptitude and job fit—and, once workers are in position, many things can be assessed, ranging from the diligence of their work to their likelihood of disengagement.
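To make the mechanism concrete, here is a minimal, purely hypothetical sketch of how such a prediction might be produced. The metrics, labels and figures below are invented for illustration and drawn from no real product or data set; what matters is the logic, in which a statistical model is fitted to quantified traces of past workers and then projected on to a new one.

```python
# A purely illustrative sketch of a PA-style prediction: a classifier is
# fitted to quantified traces of past workers, then used to score a new
# hire. All feature names, labels and numbers here are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical quantified metrics per worker:
# [emails sent per day, hours logged per day, days absent last quarter]
X = rng.normal(loc=[40.0, 8.0, 2.0], scale=[10.0, 1.0, 1.5], size=(200, 3))
# Hypothetical labels: 1 = later flagged as 'disengaged', 0 = not
y = rng.integers(0, 2, size=200)

# Scale the features, then fit a simple classifier
model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X, y)

# The 'machinic mirror': a person rendered as a probability
new_worker = np.array([[25.0, 9.0, 5.0]])
risk = model.predict_proba(new_worker)[0, 1]
print(f"Predicted disengagement risk: {risk:.2f}")
```

The point is not the model’s sophistication but its opacity: the worker scored at the end has no sight of the features, labels or weights through which the machine ‘reflects’ her.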
A Deloitte report indicates that 71 per cent of international companies see PA as a priority, because it allows management to conduct ‘real-time analytics at the point of need in the business process … [and] allows for a deeper understanding of issues and actionable insights for the business’, to deal with what have been called ‘people issues’. In other HR-related reports, the ‘people risks’ and ‘people problems’ which PA can unveil throw the concept of the mirror phase of capitalism into sharp relief: who are we (humans), in the machine’s reflection?
Increased stress
PA is likely to increase workers’ stress if data are used in appraisals and performance management without due diligence in process and implementation, leading to complaints about micromanagement and feelings of being spied upon. If workers know their data are being read for talent-spotting or for deciding possible layoffs, they may feel pressurised to raise their performance and begin to overwork, with significant risks to their wellbeing. A further risk arises with liability, where companies’ claims about predictive capacities may later be queried for accuracy, or personnel departments held accountable for discrimination.
Indeed, if algorithmic decision-making in PA does not involve human intervention and ethical considerations, this HR tool could expose workers to heightened structural, physical and psychosocial risks and stress. How can workers be sure decisions are being made fairly, accurately and honestly, if they do not have access to the data held and used by their employer? This should be dealt with to some extent in the European Union context by the GDPR, but that is by no means a fait accompli.
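To illustrate what such access could make possible, consider one elementary check workers or regulators might run if a tool’s decisions were open to scrutiny: the ‘four-fifths rule’ used in United States employment-discrimination practice, which flags a selection procedure when one group’s selection rate falls below 80 per cent of another’s. The decisions below are hypothetical; this is a sketch of the kind of audit access would permit, not of any specific system.

```python
# A sketch of an elementary fairness audit: the 'four-fifths rule' for
# disparate impact. All decisions below are hypothetical
# (1 = candidate advanced by the tool, 0 = rejected).
def selection_rate(decisions):
    return sum(decisions) / len(decisions)

group_a = [1, 1, 0, 1, 1, 0, 1, 1]  # selection rate 0.75
group_b = [1, 0, 0, 0, 1, 0, 0, 1]  # selection rate 0.375

ratio = selection_rate(group_b) / selection_rate(group_a)
print(f"Impact ratio: {ratio:.2f}")  # prints 0.50
if ratio < 0.8:
    print("Below the four-fifths threshold: possible disparate impact")
```

Without access to the underlying data and decisions, even a check this simple is impossible from the worker’s side.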
PA practices are particularly worrying if they lead to workplace restructuring, job replacement, job-description changes and the like. In any case, the use of machine learning to make predictions and provide analyses about people relies on specific kinds of intelligences prioritised under capitalism—efficiency, reliability, competitiveness and other data-driven imperatives—which may or may not reflect who individuals are, or would like to be, in modern society.
Research necessary
Many high-level governmental and organisational reports predict that AI will improve productivity, enhance economic growth and lead to prosperity for all—much as ‘scientific management’ was once heralded. As with scientific management, however, these high-level discussions do not seem to link the anticipated prosperity directly with the realities of the everyday (and everynight) human work which ultimately fuels growth. Meanwhile, various AI-augmented tools and applications are being introduced to improve productivity in factories, offices and ‘gig’ work.
There is a great deal of research on automation, but little on how AI, as a form of semi-automation, carves out the capacity to substitute for human activities in the workplace. There is also extensive research on surveillance, but again little scrutiny of how AI facilitates advances in surveillance at work.
Scholarly and governmental research on these subjects should take AI seriously, putting a metaphorical mirror in place for social reflection on how these processes occur and on which assumptions they rest—rather than presenting AI merely as autonomous software and immutable techniques for facilitation. While there have been significant inroads in climate, medical, fashion, insurance and justice-systems research, studies of AI’s use in evaluating workers and their aptitudes through quantification lag behind. Stories of discrimination and bias are already making headline news where PA has been applied and, without reflection on the mistakes made in AI and quantified analyses of workers, this is set to continue and even get worse.
Digital democracy
The recent rise in data accumulation, and the reliance on algorithms for workplace decisions, have made it possible for a machinic system to displace the role of the physical manager. If workers were to take over workplace control rooms, by deciding which tools and processes are applied, digital democracy at work could be imagined.
But AI could just as easily be used undemocratically, removing human autonomy altogether from workplace decision-making and tasks via automation. The current Covid-19 crisis has also led to a rise in online working, giving increased leeway for quantified judgements and machinic management.
More research is needed in these areas, to get a full picture of what AI will mean and, in many cases, already means for human-machine relations in workplaces. What precisely are the types of intelligences which we expect today from machines and are these really reflective of human intelligence? Why do we choose the categories of intelligence that we do, and how are data collection and processing activities relevant to the affective side of the human experience?
Perhaps most importantly, what are the surrounding risks for workers as technology advances and as we begin to question our own role in production and think about that of the machine, as AI is set to increase its autonomy? The question more broadly for humanity is: who do we think we are as we reach the mirror stage in capitalism, where we should realise we are separate and retain autonomy from a machinic subject?
As we busily install machines in workplaces, via robotics and management tools of seemingly superior intelligence to our own, we should ask: in whose (or which) reflection are we now looking?
Phoebe Moore is associate professor in political economy and technology in the School of Business at the University of Leicester and director of its Centre for Philosophy and Political Economy.