Let loose on an unprepared world in November 2022, ChatGPT quickly showed itself to be a stunning advance in the field of artificial intelligence. Indeed, the large language model proved such a good conversationalist that many were left with the distinct impression that the world had crossed a major threshold: true AI had finally emerged from the pages and screens of science fiction into reality.
The prospect of an AI-dominated world tends to recall sci-fi’s many visions—both utopian and dystopian—of machine intelligence. It’s interesting, and perhaps instructive, to compare some of the earlier imaginings of novelists and filmmakers with the emerging reality: to consider which aspects of those visions are proving prophetic, which chimerical.
The familiar science fiction scenario in which the human cannot, or can barely, be distinguished from its simulacrum has already materialized. With ChatGPT and its rival LLMs freely available, educators across the globe now find themselves in a confounding situation: the old problem of students submitting suspiciously flawless work has taken on a whole new dimension. Learners have, of course, been attempting to pass off others’ efforts as their own for as long as education has existed. LLMs, however, represent a huge upgrade to every cheat’s toolbox. Required to scrutinize the fruits of their students’ labor for signs of AI text generation, teachers in stricter and more traditional educational settings are beginning to resemble the bounty hunters in Blade Runner, Ridley Scott’s loose 1982 adaptation of Philip K. Dick’s Do Androids Dream of Electric Sheep?. Those “blade runners”, let’s recall, are tasked with distinguishing dangerous artificial humans (“replicants”) from real ones by means of the “Voight-Kampff machine”, an apparatus for detecting non-human responses to questions.
Our educators have their own Voight-Kampff machine; it goes by the name of Turnitin. Originally a plagiarism-detecting tool, in recent years this software has gained the ability to identify AI-generated content. As in Blade Runner, though, technological progress is making detection more difficult—reportedly, Turnitin’s reliability in this particular area is currently less than stellar.
By no means all educators are taking a hard line on AI; some are radically modifying their courses to incorporate it. Whichever approach becomes standard in future, we should probably brace ourselves for a precipitous drop in standards of literacy and numeracy.
Interestingly, science fiction has anticipated this kind of deterioration. E.M. Forster’s uncannily prescient The Machine Stops (1909) imagines a future in which human beings live isolated and underground in hi-tech rooms where an apparatus called the Machine provides for their every need. As contact with others is mostly made through a kind of Edwardian-era Microsoft Teams, a fear of venturing out into the world—to travel by underground train and airship—has become common. When it comes, the sudden breakdown of the all-important Machine unsurprisingly brings death and destruction.
Even more on the nose is Isaac Asimov’s droll 1958 story The Feeling of Power. This brief tale posits a future civilization for which mathematics is a lost science, a body of knowledge possessed only by the computers entrusted with all calculations. A technician creates a stir with the revelation that, by dismantling and studying old computers, he has taught himself how to do elementary math with pen and paper. In another amusing parallel with our time, each of the senior figures whom the technician meets carries an individually styled pocket computer. However, this society’s rulers clearly value their own individuality far more than that of the ordinary men and women beneath them, given that one general suggests putting math-trained men inside missiles as a cheap alternative to computer-guided weapons: “After all, a man is much more disposable than a computer”. One is reminded of comments Yuval Noah Harari made last year about AI begetting a new class of useless people. Relatedly, Harari has warned of the danger of an “algorithmic takeover”. If most decisions come to be entrusted to AI—the setting of national interest rates, for instance—we risk finding ourselves in a world where we simply don’t understand why particular decisions were made, the processes involved being definitionally opaque.
Another Asimov fiction from 1958, The Last Question, was cited by the Hugo Award-winner as his favourite among his dozens of short stories; it’s also his most anthologized. Offering a more optimistic and earnest take on AI than The Feeling of Power, it’s one of several tales about the sentient supercomputer Multivac. In its initial, 21st century form, Multivac is a massive machine, with a “face” many miles across. This mid-century assumption that the AIs of the future would be physically vast seems quaint to us—although the likes of ChatGPT in reality run on considerable quantities of hardware, in our imaginations they have no physical presence, no “body”. For us, these AIs exist only in the nowhere space of the internet. But, at least in The Last Question, Asimov did not anticipate the net. In his 21st century, the massive Multivac has to be physically attended by men. Looking to settle an argument one day, two attendants put to Multivac the half-serious question, “How can the net amount of entropy of the universe be massively decreased?”. In contrast to our LLMs, which notoriously prefer to make something up rather than admit their ignorance, Asimov’s computer says it has insufficient data to answer the question. Over the following eons, as Multivac evolves into increasingly abstract and elusive forms, the question is asked again and again, and on all but the final occasion receives the same response. Meanwhile, humankind busies itself creating new stars, engaged in a losing battle with the heat death of the universe.
The Last Question’s vision of a computer bringing about the end of history is echoed today in the notion of “the Singularity”—a theoretical point in the future when artificial intelligence surpasses human intelligence, leading to rapid, uncontrollable technological growth. Futurist Ray Kurzweil looks forward to this as a chance to realize immortality through human-AI integration (which, interestingly, is envisioned in The Last Question). But this utopian view seems to persuade few outside the tech industry. Arguably, the apprehension of philosopher Nick Bostrom—who fears superintelligent AI as a threat to our very existence—is shared by a far greater portion of the general public.