ChatGPT, released in late November by OpenAI, a San Francisco-based AI company, is what is called a Large Language Model (LLM): a system trained on a vast body of text, from which it generates new text that is fluent and intelligible. When a user asks it to write a poem on a sunrise, or on the political situation in a country, it turns out a reasonably good piece of writing. This has naturally raised concerns all round, because it could have a devastating effect on students' learning.
They will be tempted to fall back on ChatGPT to write their examinations and other assignments. It could also affect the reporting and analysis of news in the media. As it stands today, ChatGPT is still at a basic learner's level. In an experiment carried out at the University of Minnesota Law School, ChatGPT answered the questions set by the faculty reasonably well but earned only a low grade of C. Jon Choi, one of the law professors who had set the test for ChatGPT, said, "ChatGPT struggled with the most basic classic components of law school exams, such as spotting potential legal issues and deep analysis applying legal rules to the facts of a case." But he felt that it could be used to produce a first draft which a lawyer could then improve.
And he predicts that AI assistants for lawyers will become a common feature in the future. Educationists in America are quite worried about its misuse by students in schools. In New York and Seattle, public schools – government-run schools in America, whereas a public school in England is a private school – have banned the use of ChatGPT.
The challenge posed by ChatGPT is broader than its use and misuse in the education system. Scientific research, as presented in journal articles, becomes a minefield if the tool is used to write papers, for reviewers at the journals are in no position to make out whether a scientist or group of scientists penned the results of the research or whether it is a superficial summary produced by ChatGPT. Nature, the prestigious scientific journal, has taken note of the issues raised by the use of ChatGPT.
In a comment piece headlined "ChatGPT: five priorities for research", with the standfirst "Conversational AI is a game-changer for science. Here's how to respond", published on Friday (February 3), Eva A.M. van Dis and her co-authors write, "Conversational AI is likely to revolutionise research practices and publishing, creating both opportunities and concerns." The concern is that "…it could also degrade the quality and transparency of research and fundamentally alter our autonomy as human researchers. ChatGPT and other LLMs produce text that is convincing, but often wrong, so their use can distort scientific facts and spread misinformation." It is, in short, a system that can turn rogue and cause havoc.
But most governments and industries are likely to promote AI tools like ChatGPT because they seem to simplify processes and to achieve more in ever shorter time. The temptation perhaps needs to be resisted. It will of course be argued by defenders of AI, and of ChatGPT in particular, that it is a useful device and that there is no need to raise the alarm that it will take away the moral responsibility of the human being engaged in work and in thinking. The apologists of AI will also say that it is unrealistic to throw the baby out with the bathwater, as it were, because AI is a most useful tool and should be used to increase productivity and efficiency. They will say that every technological revolution faced resistance from no-changers, and that the critics of AI will learn to accept it and even to use it.