By Dubai Abulhoul
It was a clear and brisk Thursday night in Flat Rock, Michigan, when Robert Williams, a 25-year-old assembly line worker, made his way to the Ford Motor Company’s casting plant to begin his night shift. Williams was tasked with overseeing a robot built to retrieve parts from a nearby storage rack, and it so happened that on that night, the system malfunctioned. Williams decided to take matters into his own hands and retrieve the parts manually. As he did, the robot’s arm swung into motion and struck him in the head, killing him instantly. His co-workers did not find him until thirty minutes later.
Williams’ death marked the first recorded instance of a robot killing a human being, but it certainly was not the last. A 22-year-old contractor at a Volkswagen plant in Germany met a similar fate in 2015, when a stationary robot grabbed him and crushed him against a metal plate. While such incidents have fortunately remained few and far between, I could not help but reflect on them as I read Max Tegmark’s Life 3.0, which explores what it means to be human in the age of artificial intelligence. Many thought leaders have attempted to explore the nuances that shape the public debate surrounding the future of artificial intelligence, but Tegmark takes on a far more existential question in his book: what will happen when humans are no longer the smartest species on the planet?
In recent years, thought leaders have generally grouped themselves into one of three camps on this topic: the digital utopians, who believe that digital life is the natural next step in cosmic evolution; the techno-sceptics, who do not dispute that vision but believe a fully machine-run world is at least a few centuries away; and the beneficial-AI movement, which advocates dramatically increasing AI-safety research now, long before any of these existential outcomes can materialise. Tegmark makes it clear that the emergence of an AI-centred world is no longer a hypothetical scenario. It is a matter of when, not if, it will become a reality, and the journey towards that world will be neither straight nor binary.
We have plenty of literature on what our collective future will look like if the machines fully take over, but comparatively little analysis of how the individual decisions we make today will shape that outcome. This is an area that remains inadequately debated and discussed, yet one that will become increasingly hard for us to ignore.
Tegmark’s thought-provoking book concludes on a positive note, but I believe that Yuval Noah Harari’s ending in Sapiens more accurately describes what is truly at stake in the coming years and decades. Harari explains that – in light of the direction scientists are now taking – the hardest question we will face with the emergence of technology and artificial intelligence will not be ‘What do we want to become?’, but ‘What do we want to want?’. He goes on to say that those who are not spooked by this question probably haven’t given it enough thought.
I fully agree.
The author is an Emirati novelist and a Rhodes Scholar.
Twitter: @DubaiAbulhoul