The world’s largest companies cannot be given free rein in their competition to capitalise on artificial intelligence.
What is the best way to develop artificial intelligence? This question, long theoretical, is quickly becoming a hands-on concern, which will soon demand that important strategic choices be made. We are seeing two completely different approaches play out before our eyes.
One is the race among global technology giants which began with the recent launch of the Microsoft-funded ChatGPT, already provoking promises of similar systems from Google and the Chinese company Baidu. In these cases, the market (or, rather, the profit motive) is the mechanism driving companies to make decisions they probably should—and perhaps would prefer to—postpone.
The other is the European Union’s process to regulate artificial intelligence, especially through the comprehensive AI Act legislative package and related standards. This will classify systems according to four risk levels: unacceptable, high, limited and minimal/zero. Here, the modus operandi is democratic and political, expressed through legislation and other forms of regulation.
So far, the market has been dominant—although often supported by government loans and public investment in research and entrepreneurship. This is partly due to the rapid pace of technological change, making it difficult for democratic institutions and nation-states to keep up. Legally as well as technically, regulating AI has proved extremely difficult.
International efforts
ChatGPT illustrates some of the difficulties—and the risks. The system is accessible all over the world and will probably spread quickly when Microsoft incorporates it into its many products and allows other companies to do the same. Similar diffusion effects are likely to follow from Google’s and Baidu’s AI systems.
It is not evident how to regulate this inherently global and amorphous technology effectively, and the influence of individual countries may well be very limited. For this reason, international efforts are under way. The United Nations Educational, Scientific and Cultural Organization launched the first global agreement on the Ethics of AI in 2021 and there are now processes around the world—even in the United States—to regulate AI in various ways.
The EU initiative to create legislation and standards to ensure ethical and sustainable AI has been a catalyst. The union’s influence should not be underestimated: a market of some 450 million people with globally high incomes is not insignificant. The hope is that a rapidly developed EU regulation could have a norm-setting effect on the world market, although that is a highly uncertain prospect.
One hurdle relates to Baidu’s involvement in the race: authoritarian governments intend to use AI technology for surveillance and repression, and it is not very plausible that such states would voluntarily comply with EU regulations. In the case of China it is not even necessary, as the domestic market is large enough for AI technology to be developed quickly and according to completely different rules and norms than in the US or Europe.
Explosive spread
Within two months of its launch, ChatGPT had accrued 100 million users. Market forces can bring about the explosive spread of AI technologies not fully understood by the companies which develop them and which they are reluctant to explain to outsiders. Several users have managed to ‘trick’ ChatGPT into giving answers of which it is not supposed to be capable: providing recipes for drugs or explosives and expressing racist views.
Why does ChatGPT display this unexpected and contradictory behaviour? No one really knows and the company behind the technology, OpenAI, is hardly open when it comes to information. So I asked ChatGPT. Cryptically, it replied: ‘[A]s an AI model trained by OpenAI, I am constantly under development and improvement.’
It seems very difficult, even for the system’s creators, to regulate its usage and functionality. Even greater caution should guide the use and spread of more complex systems, whose more serious consequences we cannot predict or prevent. The risks of letting the market disseminate AI technology while sparking an arms race among competitors should be obvious.
Potential harms
Many scholars have pointed to the societal dangers of rapid and uncontrolled AI development. Daron Acemoglu, professor of economics at the Massachusetts Institute of Technology, has warned that rapid technological change has to be paired with welfare and labour-market reforms to prevent growing inequality, erosion of democracy and heightened polarisation. A similar approach has been proposed by the Harvard economist Dani Rodrik: ‘Government policies can help guide automation and artificial-intelligence technologies along a more labour-friendly path that complements workers’ skills instead of replacing them.’
The Swedish mathematician Olle Häggström, an expert on the existential risks of AI, has long warned of potential harms ranging from surveillance and unemployment to autonomous weapon systems and a future where machines simply take over the world. He has been concerned about the launch of ChatGPT, seeing it as an indication of how difficult the so-called AI-alignment problem, designing into an AI system all the guardrails against going off track, will prove for more capable, and consequently more dangerous, systems.
There is a case to be made for stopping or severely restricting certain types of systems until we better understand them. This would require extensive, democratically grounded debate, education and communication. Otherwise, there would likely be pushback from publics deprived of these technologies (especially if people in other countries could use them). This has already happened with the General Data Protection Regulation: while it has many positive aspects, the GDPR is widely disliked and ridiculed, the EU having managed to paint itself as a bureaucratic colossus intent on making it harder for regular people to use the internet.
More and better
Some view AI regulation as an obstacle to innovation and warn that the European approach could reduce the EU to an AI backwater. Meanwhile, the US and China would leap ahead, being less concerned with legislation, industry standards and ethical guidelines.
Of course, regulation is not perfect and states are not always benevolent or wise. Clearly, we need a combination of market and regulation. But given the potential risks, and the fast pace of technological development and diffusion, we have had too much of the former and too little of the latter.
Markets do not sufficiently counteract ‘negative externalities’ (undesirable side-effects). They do not distribute the economic value created by technology in a sustainably equal way. They do not incorporate social or political considerations. And they rarely contribute to technological solutions which benefit society as a whole.
Most of us want more and better AI. But for that, we need more and better regulation.
German Bender is chief analyst at the Swedish think-tank Arena. A PhD candidate at Stockholm School of Economics, he was a visiting research fellow at Harvard Law School in 2023.