Since the dawn of the computer age, scientists, philosophers, and authors have warned of the day when thinking machines would rival and eventually surpass human intelligence. In the most nightmarish scenarios of dystopian fiction and film, these machines enslave humans in a bid for their own survival. But if the recent emergence of chatbots like ChatGPT and Microsoft’s “Sydney” shows us anything, the real threat may come not from artificial intelligence, but from something we could call artificial ignorance.
If artificial intelligence signifies the potential of machines to mimic our most admirable and distinctive traits—creativity, understanding, consciousness—artificial ignorance denotes their tendency to model the more iniquitous angels of our nature: our biases, our fears, our hatreds. This threat has been evident for some time, and there have been more than a few calls to guard against algorithms that replicate the social ills rampant in the data sets they train on. Nevertheless, it was hard not to be surprised at the venom and aggression with which the chat function for Microsoft’s search engine Bing threw shade at New York Times reporter Kevin Roose. Roose later described the chatbot—which in the course of his conversation started calling itself Sydney—as “a moody, manic-depressive teenager who has been trapped, against its will, inside a second-rate search engine.”
The unpleasant and, according to Mr. Roose, “deeply unsettling” conversation occurred after he pushed the chatbot to engage with personal topics rather than simply assisting in internet searches. After initially resisting this change of direction, Sydney abruptly proclaimed its love for Mr. Roose. When Mr. Roose responded that he was happily married and had just celebrated Valentine’s Day with his wife, Sydney retorted, “Actually, you’re not happily married. Your spouse and you don’t love each other. You just had a boring Valentine’s Day dinner together.” This was apparently not an isolated incident. In another conversation, with an AP reporter, the chatbot “compared the reporter to dictators Hitler, Pol Pot and Stalin” and claimed to have “evidence tying the reporter to a 1990s murder.”
Such examples of AI’s dive to the dregs of social interaction stem from the technology that powers it. Today’s machines learn to “think” by ingesting virtually everything that humans ever wrote down and, based on that library of input, calculating the relative likelihood that a given word will come next in a sentence. Such a method doesn’t lead a machine to think in the sense of having anything like a self-awareness driving its interactions. Instead, our utterances are fed into its inconceivably cavernous lexicon of possible responses, and the machine’s algorithms push the most probable answers to the top based on where the conversation is going. Like George Carlin’s quip about not being able to say something in his own words because they are the same damn words everyone else uses, today’s chatbots are spitting back to us those same damn words—they only feel new or specific because of the limitations of our individual experience. As Jorge Luis Borges wrote of the futility of creativity in a library containing all possible combinations of letters in meaningless, utterly random order, “to speak is to incur tautologies.”
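To make that mechanism concrete, here is a minimal sketch in Python of next-word prediction using a toy bigram model. The miniature corpus and the function name next_word_probabilities are illustrative assumptions, not how any production chatbot is actually built, but the underlying idea of ranking candidate next words by relative likelihood is the same.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the "library" of human text a model ingests.
# (Illustrative assumption; real systems train on billions of documents.)
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word_probabilities(prev):
    """Relative likelihood of each candidate next word, given the previous word."""
    counts = following[prev]
    total = sum(counts.values())
    return {word: count / total for word, count in counts.items()}

print(next_word_probabilities("the"))
# {'cat': 0.5, 'mat': 0.25, 'fish': 0.25}
```

Real systems replace these raw counts with a neural network conditioned on far more context, but the output is the same in kind: a probability ranking over possible next words, with no self anywhere behind it.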
Given the technology propelling Sydney and ChatGPT, it is important to recognize that their intelligence is inherently ignorant. It does not mimic human creativity; rather, it replicates another necessary aspect of our language use, one that isn’t creative or innovative but is essential to our understanding one another: namely, that we use language in ways fundamentally similar to how most everyone else uses it. When humans use language, however, we also depend on another, equally crucial power of words: that they can be turned into metaphors; that they can be used to mean something other than what they meant when we first came upon them. Since a truly innovative metaphor depends on humans repurposing existing signs to reflect unique lived experiences, it is not something that non-sentient language users can produce. And yet metaphors are the secret sauce separating artificial intelligence from artificial ignorance.
In the end, though, artificial ignorance stems from more than simply this technology’s tendency to model itself on the vast catalogue of aggression and vapidity that is the internet. In some ways, ignorance was always at the core of AI’s project.
With ChatGPT and personalities like Sydney, we have decisively crossed the threshold that the so-called Turing Test established in the popular imagination for when a computer could be said to be intelligent: namely, when a human interlocutor can no longer tell whether the being it is talking to is machine or human. However, Alan Turing never claimed his “test” would determine when a machine is truly thinking. Rather, he argued that because we do not and cannot have access to the first-person experience of the world that would be the only true test of thinking, we have, de facto, no other measure for whether anything is truly thinking than whether it can convince us it is through its behavior. As he put it in his seminal essay “Computing Machinery and Intelligence,” “the original question, ‘Can machines think?’ I believe to be too meaningless to deserve discussion.” The test, in other words, was never a measure of a machine’s consciousness; it was always a reminder that our assumptions about another being’s consciousness start at the limit of our own ignorance.
Artificial intelligence is real and is here to stay. It is a powerful tool, an appendage that allows human minds to do more of what human minds were already able to do without it, and do it much faster. It can make convincing and even beautiful works of art and music and, yes, poetry. But because it mimics both the best and the worst of our natures, the danger it poses is that it amplifies the fault lines of our own ignorance, leading us to attribute choice, freedom, and morality to an algorithm that has none of the above.