As citizens, we grant public bodies authority over life and death—but what do we demand in return for surrendering this power? Traditionally, we require transparency about how this authority is exercised, and we insist that democratic institutions—the media, judiciary, and relevant ombudsmen—maintain constant scrutiny over these processes. Yet with the digital transformation, we have already encountered situations where such oversight has become impossible.
Recent examples from Sweden illuminate how algorithmic power, in which the state operates through algorithms, can challenge the very foundations of our democracy. This crisis is playing out across Europe, a continent caught between tech giants, regulation, and the digitalisation of public administration, which makes it urgent to learn from misuses of algorithmic power in countries such as Sweden. To contribute to this vital discussion, our edited book Algorithmic Rule: AI and the Future of Democracy in Sweden and Beyond, published in cooperation with the Foundation for European Progressive Studies (FEPS), appears in November.
Algorithms have become omnipresent in our societies. Consider administrative tasks such as medical diagnosis, allocating pupils to schools, or determining entitlement to social insurance and employment benefits. Algorithms can clearly improve such tasks in both quality and efficiency, but the consequences of inaccurate algorithms can prove disastrous. We know this, unfortunately, through hard experience in cases where transparency and scrutiny have not been maintained. People’s lives have been put at risk, governments have (rightly) borne the consequences through resignation, and sovereignty has been willingly handed over to non-democratic private organisations. We can no longer pretend that all of this is happening in some distant, “digital” sphere. It is happening right here, in the heart of our society.
With its law of 1766, Sweden is widely regarded as the first country in the world to establish both freedom of the press and public access to official government records for every citizen. These principles remain held in the highest regard and are frequently invoked both to check executive power and to ensure that processes at every level remain legal, just, and inclusive. They are, on paper, fully applicable to the digital world—but we have observed troubling examples that suggest the opposite is occurring in practice.
With the digital transformation of public administration, the automation of public decision-making, and the accelerating “datafication” of welfare systems, we are jeopardising these fundamental freedom-of-information principles. When public administration moves into a black box—through the use of commercial proprietary software for even the most fundamental governmental tasks, the outsourcing of political decisions to programmers, and the hiding of crucial trade-offs within impenetrable code—we undermine both public oversight and explainability. The end result of this trajectory, we argue, is an “algocracy”: bureaucracy concealed inside algorithms, beyond the reach of democratic scrutiny.
The human cost of algorithmic scandals
We use the term “algocracy” to describe most, if not all, of the algorithmic scandals that many countries have faced, with the most devastating European examples being the Dutch childcare benefits scandal and the UK Post Office scandal. In both cases, government-run algorithms led not only to court cases and the wrongful accusation and conviction of thousands of innocent people but also to devastating losses of livelihood, bankruptcies, and even suicides. These are not abstract technological failures—they represent profound human tragedies born from the intersection of flawed systems and absent oversight.
At the same time, we desperately need the digital transformation and its many promises—but only if implemented in the correct way. A deeply worrying trend is that many of the values required for a democratic digital state are increasingly seen as trade-offs ready to be sacrificed on the altar of efficiency and economic growth. One particularly horrifying example involves a generative chatbot that incited a young person to take his own life.
Rather than drawing crucial lessons from similar incidents, there are disquieting signs that large parts of the world are willing to throw safety regulations away, encouraged by tech companies and their shiny large language models (LLMs)—the main building blocks of many AI systems. This recklessness is often justified in the name of innovation, repeating the old phrase that regulation stifles innovation. However, the dichotomy between regulation and innovation is ultimately a false one—on the contrary, regulation often provides fertile ground for genuine innovation. Politicians of all stripes must be warned against falling for this dangerous cliché.
Several voices have raised urgent concerns about the growing commercial influence that can result from naive public procurement of digital solutions. The algorithms that the public sector buys off the shelf from big tech companies will not be written with “European values” in mind—they are designed to maximise profit, not democratic participation or social welfare. From this uncomfortable truth, only one conclusion can be drawn: Europe needs to take back control—of our data, and also increasingly of the power to code around this data. Only inclusive technology that has explainability and transparency as its foundation will work in the interest of the many rather than the few.
Learning from Sweden’s digital frontier
In light of several current EU processes aiming to control the potential harms of digitalisation, Sweden offers crucial lessons about shedding naivety around algorithmic power, which we describe in the book. When automated school placement systems misallocated thousands of students, the local government refused to share the underlying code and blocked legal inquiry, arguing that plaintiffs could not prove—without access to that very code—that disclosure was necessary. This Kafkaesque circularity reveals how algorithmic opacity can shield authorities from accountability. When Swedish Security Service personnel recently uploaded their morning running routes to fitness apps, they compromised the safety and integrity of the service’s highest-level security operations, repeating the exact mistake the US Secret Service had made years earlier.
These failures occur even as the Swedish public sector positions itself at the vanguard of algorithmic innovation—using automated management systems for government work, deploying chatbots for citizen services, and transferring highly sensitive personal data to private sector partners. The pattern is clear: technological ambition without adequate safeguards transforms efficiency gains into democratic deficits, turning tools meant to serve citizens into shields that protect systems from scrutiny.
Laws on transparency are already being broken across Europe, underscoring the urgent need for robust regulation and genuine democratic control. Another essential lesson is to acknowledge the inherently political nature of this transformation. Technological change implies fundamental power shifts—shifts that cannot mindlessly be handed over to the companies that produce these products. Rather, they call for rigorous scrutiny and democratic oversight, and must be subject to real influence by their users—not least the workers increasingly affected by algorithmic management systems that determine their working conditions, performance evaluations, and career prospects.
These lessons must guide the future of the algorithms managing our public administration, our workplaces, and ultimately our democracy. The Swedish experience offers both a warning and a roadmap: a warning about what happens when democratic oversight fails to keep pace with technological change, and a roadmap for how European nations can reassert democratic control over their digital futures. As Europe stands at this critical juncture, the choice is clear—either we shape technology to serve democratic values, or we allow technology to reshape democracy itself into something unrecognisable and fundamentally unaccountable.
