Edward Said, drawing on Matthew Arnold’s work, described culture as “each society’s reservoir of the best that has been known or thought”. While anthropologists have never agreed on a single definition of culture, there has been far less debate, if any at all, over the notion that human beings are its primary creators. What happens when the best of what has been known or thought is no longer exclusively attributed to the human mind, and instead to an AI’s machine learning capabilities?
The McKinsey Global Institute estimates that by 2030, up to 800 million people will be at risk of losing their jobs to robotic automation. When I first read the report, I told myself, with just enough arrogance to mask my discomfort, that people in my profession would never be replaced by robots.
We are writers, I thought. Producers of culture. Preservers of the written word. Upholders of the truth. How could we possibly be replaced? And with what?
It turns out that writing, like other creative professions, is not merely at future risk of robotic interference: in some labs around the world, basic trials have already produced machine-written fiction. Hitoshi Matsubara, a computer science professor at Future University Hakodate in Japan, led a collaborative project in which human authors and an AI program produced a short-form novel. Matsubara’s team selected a number of words and sentences and set parameters within which the AI program ‘wrote’ the novel on its own. The novel, titled The Day a Computer Writes a Novel, even made it through the first round of the Nikkei Hoshi Shinichi Literary Award, a national literary prize in Japan.
We tend to associate robotic automation with highly repetitive, predictable, and structured tasks that require minimal creative, social, or emotional intelligence. While it is true that such jobs face a higher risk of automation, creative professions are not necessarily exempt from a similar fate: the question is not if artificial intelligence takes on their tasks, but when. Before lamenting the impending death of literature and the arts, however, it is important to first contextualise how artificial intelligence actually works.
The basic concept behind machine learning, a subset of artificial intelligence, is to give computers access to data from which their programs then “learn” independently. The learning process rests on probability: the computer makes predictions with a degree of certainty, and those predictions are filtered through feedback loops. Depending on whether a decision is labelled “right” or “wrong”, the computer corrects its miscalculations by not repeating them in the future. In short, machine learning allows an AI to work with the data it is fed and to learn from its own mistakes as it goes along.
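The feedback loop described above can be sketched in a few lines of code. What follows is a deliberately toy, hypothetical example (not the system used in Matsubara’s project, nor anything close to the scale of a real machine learning model): the program makes a guess, is told whether the guess was “right” or “wrong”, and nudges its internal numbers so it stops repeating the same mistake.

```python
# A minimal sketch of a feedback loop: the program learns a simple
# threshold rule (predict 1 when w*x + b > 0) from labelled examples.

def train(examples, epochs=20):
    """Repeatedly guess, check the feedback, and correct mistakes."""
    w, b = 0, 0
    for _ in range(epochs):
        for x, label in examples:
            guess = 1 if w * x + b > 0 else 0
            error = label - guess   # 0 means the guess was "right"
            w += error * x          # a "wrong" guess nudges the rule
            b += error              # so the mistake is not repeated
    return w, b

# Labelled data the program is "fed": inputs above 5 belong to class 1.
data = [(x, 1 if x > 5 else 0) for x in range(11)]
w, b = train(data)
predict = lambda x: 1 if w * x + b > 0 else 0
print(predict(9), predict(2))  # → 1 0
```

The program is never told the rule “above 5”; it recovers it purely from being corrected, which is the essence of the feedback loops that larger systems scale up to millions of parameters.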
On the one hand, while Matsubara’s team succeeded in producing an AI-written novel, the end product was neither an international bestseller nor a significant, meaningful work of literature. The machine was able, with the help of its human creators, to write a novel, but it did not succeed in giving it meaning per se. The need to express, the need to inject meaning into a body of text, is perhaps what gives literature its universal, and human, appeal. On the other hand, since an AI can only work with what its scientists feed it, doesn’t its learning process sound eerily similar to the human experience itself? Don’t we, too, attempt to derive meaning from the social, historical, and contextual data we have been exposed to over the years?
As children, we too had to learn certain rules in order to write. We first wrote letters, then words, then sentences, then paragraphs, and so on. We, just like AI programs, learn through feedback loops. The question then becomes how scientists will quantify language, rather than whether it can be quantified at all. It would be an extremely complex endeavour, but not an impossible one. One could even argue that the difference between an AI program and the human mind is that, unlike the former, the latter does not always learn from its mistakes in the same calculated, predictable, and efficient manner.
The future disruption of writing at the robotic hands of AI devices poses intriguing, and largely unaddressed, consequences not only for authors, but also for the publishing industry at large. Will publishers issue different contracts for novels written with the help of an AI assistant? Will poets be allowed to use AI programs to produce a list of rhymes that best fit their stanzas? Who will get credit for writing the poem: the scientist who chooses the best information to feed the machine, or the machine that produces the final body of text? In the case of a nomination or a win, who will the Nobel Prize in Literature be awarded to? Will writing produced with the help of AI programs even be considered “literature”?
The potential scenarios are endless, limited only by how far our imaginations, and our scientists, are willing to take us. The possibilities, however, force us to confront, with an unprecedented degree of urgency, two key questions surrounding our collective, albeit slightly existential, future as writers, artists, and all-around creatives: what will make our work uniquely human in the future, and will that distinction actually matter?