Despite the ethical and economic concerns that ChatGPT has generated, AI startups are proliferating in Silicon Valley, each aiming to create more sophisticated versions of Amazon’s Alexa, Apple’s Siri and Google’s Assistant. Hollywood writers have gone on strike over the use of ChatGPT, and Hollywood actors have joined the protest. But in Silicon Valley there is ever-feverish activity to push the frontiers of technology, and AI is the flavour of the season.
The ChatGPT technology is to form the basis for creating AI agents, and tech entrepreneurs are talking about building agents that work under human supervision, called co-pilots. But all those who are looking to commercially exploit the possibilities of AI agents accept upfront that the agents are much too simple, even primitive, compared to the human brain, and that for now they can execute only simple tasks.
Kanjun Qiu, CEO of Generally Intelligent, a competitor of OpenAI, the creator of ChatGPT, says, “Lots of what’s easy for people is still incredibly hard for computers.” But many of those investing in the new technology think that AI agents can be made to do useful work like booking flight tickets, ordering a burger or even sending out emails to schedule a meeting. Qiu explains the challenge in completing that last task: “Say your boss needs you to schedule a meeting with a group of important clients. That involves reasoning skills that are complex for AI – it needs to get everyone’s preferences, resolve conflicts, all while maintaining the careful touch needed when working with clients.”
The new investors and startups are looking to challenge and beat established players like Microsoft and Google. Inflection AI raised $1.3 billion in June, with the goal of creating an AI agent that can perform complex tasks. But OpenAI is not standing still: it has released an upgraded version of ChatGPT, called GPT-4, which is endowed with more strategic and adaptable thinking.
But the future is not all rosy. There are issues of ethics, and even of imagination gone wrong. Yoshua Bengio, considered a ‘godfather of AI’, is fearful that AI agents could start to act on their own and do unexpected things. He says, “Without a human in the loop that checks every action to see if it’s not dangerous, we might end up with actions that are criminal or could harm people.”
At the back of the mind of everyone watching the evolution of ChatGPT is HAL 9000, the diabolical computer that kills off the astronauts aboard the spaceship in ‘2001: A Space Odyssey’, the science-fiction novel by Arthur C Clarke and the 1968 film directed by Stanley Kubrick. An anonymous creator has posted online an AI construct called ‘ChaosGPT’ with the instructions ‘Destroy humanity’ and ‘Attain immortality’. So the potential for destructive misuse of AI exists, just as DNA sequencing could end up creating monsters in the biological sphere.
It is no wonder that there is demand for regulation of AI, even from Sam Altman, whose company OpenAI created ChatGPT. Science and technology are double-edged swords: they can bring tremendous good, but they are also capable of unleashing uncontrollable destructive forces. It is the moral sense of human beings that should guide the use of technology, and it should be mandated that anything that harms human beings is shut off, just as nuclear reactors are shut down once malfunctioning begins. Yes, the anti-virus programs in software systems could serve as a model for checking rogue AI developments. More than anything else, what AI needs is human supervision.