After intense and extended negotiations, European legislators and executives have reached an agreement on the Artificial Intelligence (AI) regulatory framework. The agreed framework will now have to be passed by the European Parliament in Brussels and ratified by EU member states. China and the United States have some AI regulation in place, but EU officials claim that theirs is the most comprehensive framework.
The EU AI regulatory framework seeks to protect the fundamental rights of individuals by restricting facial recognition, while at the same time permitting AI systems to be used in detecting serious crimes such as terrorist acts. EU Commission President Ursula von der Leyen said: “The AI Act is a global first. A unique legal framework for the development of AI you can trust. And for the safety and fundamental rights of people and businesses. A commitment we took in our political guidelines – and we delivered. I welcome today’s political agreement.”
The EU lawmakers and policymakers had to strike a balance: giving European AI companies enough encouragement to surge forward with new and advanced products, as is happening in the American and Chinese AI sectors, because this is at the moment the technological frontier. The EU politicians did not want to place restrictions on European AI ventures that their counterparts elsewhere do not face. The compromise they struck is that AI companies will have to disclose the AI component in the software systems they offer in the market. A separate provision covers General Purpose Artificial Intelligence (GPAI), which will face fewer restrictions and lighter compliance obligations.
On the other hand, AI systems involving facial recognition, biometrics and behaviour-manipulation components will have limited scope. The penalty for non-compliance and non-disclosure has been set at seven per cent of a company’s global turnover or 35 million euros, whichever is higher, with lower penalties for other failures. The EU AI Office, attached to the European Commission, will monitor AI businesses for violations of the rules.
The test of the EU AI regulatory framework is whether it provides a level playing field for European AI companies like Germany’s Aleph Alpha and France’s Mistral AI. The business lobbies, however, are not too happy with the agreement reached by the politicians. Daniel Friedlaender, Europe chief of the Washington-based Computer & Communications Industry Association (CCIA), said, “Regrettably, speed seems to have prevailed over quality, with potentially disastrous consequences for the European economy.”
The European dilemma is this: regulation that is too stringent becomes an obstacle to technological innovation and leaves you behind your global competitors, American or Chinese, while a loose set of ineffective rules is not a choice either. And of course there is no question of leaving the AI sector unregulated, as it currently is in Britain.
The dangers of the misuse of AI are much too real. AI apps like ChatGPT can be subversive in more ways than one, and there is a need to protect human initiative and authorship. It may be necessary for AI companies to be equal stakeholders in keeping AI and its apps safe enough to protect human freedoms. When companies develop a new AI app or system, they would need to test in the laboratory what its potential misuses could be before releasing it in the market. It will not be an easy task for politicians, policymakers, technologists and businesses to keep an eye on when and where the danger marks are being crossed.