Sharing information accelerates the pace of co-innovation, facilitating inter- and multi-disciplinary research and study.

The saying ‘information is power’ links our level of influence and power to the amount and quality of information we own. Today, more than ever, and thanks to technology, we share data and information to gain greater influence and power, to optimise decisions, to uncover key insights into a given phenomenon or to customise user experiences. (Big) data allow us to extract insights and counter-intuitive findings, or, for instance, to track in time and space how information about a phenomenon evolves.
During the Covid-19 pandemic, open-data platforms, open research and open-source software programmes demonstrated the power of the openness paradigm: sharing information to overcome a large-scale challenge, such as forecasting virus propagation based on worldwide data collection and collaboration.
A year ago, the rise of technologies such as ChatGPT triggered discussions about protecting human rights, freedom and democracy by sharing information on how some algorithms are developed. More recently, a global discussion on open artificial intelligence, which led to an open letter, has focused on how to take greater advantage of open-source algorithms so that any implemented algorithm can be scrutinised, challenged and improved for the benefit of all, while reducing potential threats.
The legacy paradigm
Information has long been part of the economy and has been used as leverage to increase the power of an individual or institution. Scientific globalisation was made possible by the Gutenberg press in the 15th century and the Watt steam engine in the 18th. These two innovations made it possible to share knowledge, discoveries and theories. The first countries to take advantage of these novelties were able to own the knowledge and innovations, and so increase their power. The same applies to companies and individual inventors.
Now, with big data and AI feeding algorithmic models, information is collected, structured and processed automatically on a large scale to provide understanding, predictions or answers to specific questions. Despite the obvious benefits in many fields, AI presents threats we need to fight, such as discrimination and environmental impact. Additional challenges are the limited size of the datasets, and of the talent pool, that we need access to in order to come up with novel technologies. Opening some datasets and computer source code can help us overcome these limitations and develop next-generation breakthrough innovations while protecting our fundamental rights.
Sharing information
Sharing information makes it easier for anyone to overcome the most challenging obstacles by accelerating the pace of co-innovation, facilitating inter- and multi-disciplinary research and study, and expanding and propagating scientific knowledge and advanced research results.
During the pandemic, many countries shared their health statistics to feed predictive models and quickly derive relevant insights on Covid-19, which accelerated research in AI applied to healthcare and increased interest in speeding up the academic peer-review publication process. Sharing information generally enables large-scale, data-driven decisions to manage crises.
Some AI-based technologies require diversified, large-scale datasets that often can be retrieved only by accessing open-data sources, such as ImageNet, a platform used to train image-recognition algorithms. Finally, building training datasets from large open databases enriches perspectives by offering greater diversity and representativeness, thus decreasing the likelihood of bias and helping to guarantee the inclusiveness of the resulting innovation.
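As a minimal illustration of reusing such open data (not a method from this article), the sketch below loads a model pre-trained on ImageNet using the open-source TensorFlow library introduced in the examples further down; the model choice and the file name photo.jpg are placeholders:

```python
# Illustrative sketch: reusing weights pre-trained on the open ImageNet
# dataset. "photo.jpg" is a placeholder for any local image file.
import numpy as np
import tensorflow as tf

model = tf.keras.applications.MobileNetV2(weights="imagenet")  # open weights

img = tf.keras.utils.load_img("photo.jpg", target_size=(224, 224))
x = tf.keras.applications.mobilenet_v2.preprocess_input(
    tf.keras.utils.img_to_array(img)[np.newaxis, ...])

# Print the three most likely ImageNet labels for the image.
for _, label, score in tf.keras.applications.mobilenet_v2.decode_predictions(
        model.predict(x), top=3)[0]:
    print(f"{label}: {score:.2f}")
```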
This new paradigm based on sharing information also enables us to protect people's fundamental rights, as it encourages key players to share how they built technologies that can have a significant (in many cases negative) impact on free will and democracy. The accelerated propagation of conspiracy theories and ‘fake news’ on social media demonstrates the urgent need to make publicly available the recommendation algorithms of platforms such as X, Facebook and TikTok, and of systems such as ChatGPT.
Concrete examples
Open-research and open-access publication platforms such as ScienceOpen, ResearchGate or Wellcome Open Research enable the sharing of research results and methods, thus accelerating and facilitating academic research and development. They allow outputs to be compared and challenged, helping to build consensus, to scale solutions to practical problems more quickly and to translate them more easily into industrial applications.
Open-source software and libraries enable faster co-development by providing developers, scientists and engineers with ready-to-use computer programmes and software functionality, either with access to the source code (open-source software) or without it (libraries and application programming interfaces). The Python library TensorFlow, for example, is commonly used to implement machine-learning algorithms, as in the sketch below.
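A minimal sketch of that ready-to-use functionality, defining and training a small classifier; the data here is synthetic and purely illustrative:

```python
# Minimal sketch: defining and training a small classifier with the
# open-source TensorFlow library. The data below is synthetic.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
features = rng.normal(size=(100, 4)).astype("float32")  # 100 samples, 4 features
labels = rng.integers(0, 3, size=100)                   # 3 classes

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(features, labels, epochs=5, verbose=0)
```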
Open-data platforms such as World Bank Open Data or the World Health Organization's open-data repository make it possible to create representative training datasets to analyse an issue or increase the accuracy of statistical metrics. This allows us to create more efficient algorithmic models to solve large-scale and complex problems. We could also mention the United States Census Bureau, the Bold Open Database by Veuve Clicquot or, more recently, Météo France, which will soon share its data publicly to leverage the skills of talented individuals in the analysis of climatology and real-time weather data.
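As a hedged illustration, the sketch below queries the public World Bank Open Data API for a single indicator (total population, code SP.POP.TOTL); the endpoint details reflect the v2 API at the time of writing and may change:

```python
# Hedged sketch: fetching an indicator (total population, SP.POP.TOTL)
# from the public World Bank Open Data v2 API. Endpoint details may change.
import requests

url = "https://api.worldbank.org/v2/country/FR/indicator/SP.POP.TOTL"
resp = requests.get(url, params={"format": "json", "per_page": 5}, timeout=10)
resp.raise_for_status()

metadata, records = resp.json()  # the API returns a [metadata, data] pair
for row in records:
    print(row["date"], row["value"])
```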
How to act
Actors in the private sector need to distinguish between the algorithmic technologies and data that are key to their intellectual property and business model and the secondary ones that merely support the former. They can also share specific pieces of their source code to take advantage of the open-source paradigm, including code and model benchmarking, algorithmic-bias detection or general improvements, while preserving their intellectual property for a given period. In addition, even without opening any of their technology's source code, they can envisage sharing some or all of the datasets used to build the algorithms it embeds.
Sharing the best practices that define part of algorithmic governance might make teams and companies more competitive, since they become more trustworthy and therefore more attractive to users, consumers, the public and markets. Finally, sharing mistakes, failed attempts and lessons learned is also critical in the openness paradigm, providing every actor with a safe space to share, discuss and challenge one another.
There is a growing discussion around ‘openness’, which is likely to become a standard vision for public and private institutions. Next-generation expectations include building and deploying a concrete, specific open strategy that defines which innovation components to open, such as the data, the algorithm and the source code, as well as the conditions for sharing. This is part of data and algorithmic governance.
This was first published by the London School of Economics Business Review