The European Parliament has passed, with an overwhelming majority of 523 votes in favour and 46 against, a law to regulate Artificial Intelligence (AI): the Artificial Intelligence Act. The legislation had been in the pipeline since 2021 and gained momentum after OpenAI's ChatGPT burst into the market, setting off a race among the American tech behemoths, chiefly Microsoft, which has a big stake in OpenAI, and Google, even as France's AI startup Mistral AI and Germany's Aleph Alpha prepare to join the race.
The tech companies had been lobbying hard and exerting pressure on Members of the European Parliament (MEPs) to make the legislation less stringent. Observers have said the MEPs withstood the pressure. European Union (EU) Internal Market Commissioner Thierry Breton, praising the legislation, said, "We are regulating as little as possible and as much as needed with proportionate measures for AI models." He added, "Europe is now a global standard-setter in trustworthy AI."
The law restricts police use of AI for general surveillance of people, though it makes room for verifying the identities of suspected terrorists. It prohibits profiling people by race or sexual orientation using AI models, especially through biometrics and facial recognition in public places, and when the police want to use AI for such profiling they must obtain judicial approval. The law also places greater restrictions on AI models that are deemed risky.
The fines imposed for violation of the law range from $8.2 million to $32.8 million.
The United States and India are still in the process of regulating AI, while China already has rules in place. The advantage of the EU legislation is that it emerged from an open debate about the pros and cons, between those favouring regulation and those opposing it, especially the tech giants. Those who oppose stringent regulation in Europe feel that overly restrictive laws will put European tech companies at a disadvantage in making progress. On the other hand, the law requires that companies dealing with high-risk AI models incorporate risk assessments before those models are made available for public use.
AI, like all breakthrough technologies, opens up new frontiers, but it also comes loaded with dangers and risks. This was so with nuclear power, and it will be so with AI. AI can become a weapon in the hands of governments and big corporations to manipulate the minds of people on a mass scale. It can take away much of the choice that individuals exercise through their own thinking and willpower. It is this fear of the manipulation of people's perceptions and thinking that most troubles people about the prospects of AI.
It is, however, certain that the genie cannot easily be put back into the bottle. AI is out there, and it will grow further. The only thing that can be done is to put sensible regulation in place through a democratic process. The suggestions of experts are important in the matter, but the decision cannot be left exclusively in their hands; there is a need for an open debate about the issues concerning AI.
Whatever the European Parliament's claims about having produced the best AI regulation, it shows the way because the legislation was debated in the public domain and people know what its provisions are. There is also a defined process for how the law will be implemented; arbitrariness in implementation would only give rise to abuse of the law by the authorities. And there will be a need to modify the regulation, and even make new laws, as the situation changes and as the technology evolves.