We cannot trivialize the power and importance of artificial intelligence (AI) in the modern economy. Nor can we legitimize every purpose to which it can be put, across Europe and beyond. AI, along with transhumanism and related technologies, is the next frontier of the fourth industrial revolution (Industry 4.0). Together they have the potential to transform humanity’s understanding of itself in ways the world has never known.
The market for AI is massive. The expertise needed in the field is growing exponentially; in fact, firms are unable to meet the demand for specialists. AI’s contributions to both advanced and emerging economies are significant, and it is powering fields that once depended on manual labor and painstakingly slow processes. Precision agriculture, for example, now uses drones to irrigate fields, monitor plant growth, remove weeds and tend to individual plants. This is how the world is being fed. Journalists are using drones to search for truth in remote areas. Driverless cars are being tested. Drones are doing wonders in logistics and supply chains, but they are also used for killing, policing and tracking down criminal activity. AI offers many further advantages in healthcare, elderly care and precision medicine. AI machines can do many things more efficiently than humans, or tread spaces too dangerous for humans to enter. This is the gospel. Take it or leave it.
But there is more to the story. What is also true is that ‘the world is a business’, and business is politics that controls science, technology and the dissemination of information. These three entities know how to subliminally manipulate, calm, manage and shape public sentiment about anything. They control how much knowledge we can have; who is vilified for knowing or speaking the truth, or for demanding an ethical approach to the production and use of AI; and who is turned into a hero for spinning it.
Additionally, AI depends massively on data in order to mimic humans. Here, questions of privacy arise: individuals’ freedom as private citizens is long gone, as in an Orwellian dystopia. Tech companies have issued countless apologies for breaches and even for the sale of private data. Moreover, will AI’s relentless improvement replace too many jobs too quickly? Others think that AI will instead allow us to devote more time to creative endeavours. We are urged not to despair, for these are just symptoms of advancement. For now, however, such consolations are conjecture rather than fact. Is it not time to adjust our educational curricula to reflect this change? One wonders why these pressing issues are not part of electorates’ concerns.
Too many unknowns
What we fail to see is that there is a hierarchy of things that matter to a corporation, or even to a new AI start-up. The financial bottom line sits at the top of it all: the need to survive by beating the competition; to be the biggest, the best and the most innovative, so that new disruptive technologies do not relegate incumbents to the sidelines; to attract more investors and a larger market share, or to snatch a chunk of the global AI brand. Most importantly, because AI is the latest frontier of medico-techno-scientific advancement, the nations that thrive in it are seen as the most advanced. It is good for reputation. These things matter more than who will suffer because of AI. In the age of responsibilization, people must know that they are on their own.
Another fallacy is that we don’t know enough about AI. The public may not know, but does that mean the companies that build it, or the governments that purchase or sponsor it for their military-industrial complexes, do not know what consequences AI is capable of producing?
Policymakers are far behind in regulating AI. How do we regulate what we know little about? Who will teach them, if teaching them is not in the interest of those who benefit? If industry owns the legislature through lobbying, how do we regulate anything or ask for corporate social responsibility (CSR), which was in any case failed at birth because it is voluntary? AI does not exist only to help us do things better, faster and more efficiently. We must recognize its multi-purpose uses in order to grasp even a modicum of its complexity and the motives behind it. We cannot leave this to CSR. Binding legislation and industry regulation must keep abreast of this change, and more people must be educated to understand what we are dealing with.
We must live with it by dealing with it
AI is here to stay, or at least the technologists, industry politicians and investors have decided so. In and of itself, AI is a good tool; what humans and governments do with it is quite another matter. What is the responsibility of the firm in producing and using AI? Will digital authoritarianism, with the omniscience and omnipresence of mass surveillance, rule?
There is much excitement in the air about the endless potential AI provides. But we live in a world in which money rules and ethics is dismissed as a weakling’s concern. Were Adam Smith, Joseph Schumpeter or Anders Chydenius here, what would they make of AI? The economics, entrepreneurship, innovation and freedoms associated with AI in human lives are still unfolding. These thinkers and economists would perhaps point to the institutions that shape AI’s entrepreneurial ecosystems. Kant would also chip in with his ‘greater good’ sermon, but who or what will this greater good be? He is not here, so we have to answer.
So the question is: which industrial policies will promote the proper use of AI for the greater good, through ethical responsibility, amid profit, power, politics and polity? Woe to us if AI falls into the wrong hands. We should be aggressively mitigating the prospect of terrorists and criminals producing and using AI in ways that harm society. This is an urgent call to action.