In a manner of speaking, the United States is running with the hare and hunting with the hounds in the case of Artificial Intelligence (AI). Americans in private business and in government have recognised that they have come upon a technology whose misuse could lead to catastrophes, and not just glitches and snags.
Signing an executive order in the Oval Office of the White House on October 31, President Joe Biden pronounced with surprising clarity and a sense of finality: “To realise the promise of AI and avoid the risk, we need to govern this technology. There’s no other way around it in my view; it must be governed.”
America is at the forefront of AI technology, and even the creators of AI have been seeking regulation from the government. ChatGPT, an AI chatbot that can write reports, stories, poems and research proposals, broke into the public sphere last November and created ripples of excitement and apprehension. OpenAI co-founders Greg Brockman and Ilya Sutskever and CEO Sam Altman, in a statement in March this year displayed on the company’s website, argued that there is a need for an international regulator on the lines of the International Atomic Energy Agency (IAEA), drawing a parallel between AI and nuclear energy. President Biden, through his order and his statement, has shown the determination of the American government to set up the necessary safeguards.
This is but the beginning of a long road. Deciding what the safety standards should be, along with their rules and sub-rules, is itself a complicated exercise. The legislature has to frame the law and pass it too. Most lawmakers will have differing ideas, and there will be a clash of views and interests. And private industry has already expressed its anxiety that the rules might be used to harass it. The AI industry does not want to be penalised if the technology it is creating is misused by others.
AI industry members do not want to be held responsible. The question that arises is whether the industry can build in safeguards so that mischief-mongers cannot misuse the technology in the first place. But this could hamper the free development of the technology, because no one knows beforehand the many ways it can be used; those uses will evolve in the hands of the users.
It is pretty complicated. Regulation will be necessary, and it will be helpful, but the question goes beyond that. Will it be necessary to place a self-restraint clause on the evolution of the technology when it reaches a point where it seems to be getting out of control? This is to avoid the ancient sin of hubris: overreaching one’s capacity and losing control over one’s own invention.
It is interesting that the makers of ChatGPT reached that point themselves and asked for regulation on their own, instead of waiting for others or the government to insist on it. In his testimony to the US Congress in March, Altman expressed the view that AI could “cause significant harm to the world”. That is a dire warning indeed, coming from a significant AI developer.
It has also been pointed out that the costs of developing AI will decrease, and that the number of those who can develop it will increase hugely. Because the technology would allow anyone, or everyone, to customise AI, if it were to fall into the wrong hands, the harm it could unleash would be enormous.
It is not surprising that British Prime Minister Rishi Sunak has convened a conference in Britain to discuss the pros and cons of AI. It is striking that AI is causing so much concern all around, when this is only the beginning of its huge potential, for good and for bad.