In a recent article for Sublation Magazine, Slavoj Žižek outlines Rousselle and Murphy's argument that "ChatGPT is an unconscious": through its stupidities and its slips, ChatGPT actually expresses what we humans are violently repressing.

With the overnight emergence of AI software – from ChatGPT to AI image generators to emotion-recognition software – scholars and stakeholders are struggling to make sense of a few things: what we can gain from AI, what we can lose, and what AI even is in itself. Is AI conscious? Can it replicate human thought? Are we on a path toward replication of the human mind?

These once sci-fi questions have become pressingly real. As such, they require us to ask the more critical question: what makes us human? Psychoanalysis allows us to answer in some of the most incisive ways. Let's explore a few of these here.

Žižek's claim that ChatGPT is itself an unconscious has its compelling points. ChatGPT is a predictive language machine. Given a prompt, it guesses which word should come next. In this sense, Rousselle and Murphy have a point.
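To make the idea of a "predictive language machine" concrete, here is a minimal toy sketch of next-word prediction. This is a simple bigram frequency model, vastly cruder than ChatGPT's actual neural architecture; the corpus and function names are invented for illustration only.

```python
from collections import Counter, defaultdict

# Toy corpus (illustrative only): the model will "learn" which word
# tends to follow which.
corpus = "the unconscious is structured like a language and the unconscious speaks".split()

# Build bigram counts: for each word, tally the words that follow it.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the most frequent follower of `word` in the corpus, or None."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))  # "unconscious" follows "the" twice in this corpus
```

The point of the sketch is only this: the machine has no intention, only statistics. Its "guesses" are regularities extracted from human language, which is why its slips can look uncannily like ours.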

One of Jacques Lacan's most famous claims was that "the unconscious is structured like a language."[i] Indeed, in the clinic, it is the signifiers that get "stuck" – the words that we repeat, the words that slip, the words we forget – that point to our underlying symptom. As Lacan said, "a symptom is a signifier stuck in the flesh."[ii] Žižek's point is that the machine slips. It makes mistakes. And these mistakes, based on its predictions regarding human language, are representative of our own unconscious slips. In Žižekian parlance, ChatGPT slips on our behalf. But is this enough to say that ChatGPT is itself an unconscious? That it is some sort of externalization of our own repression?

One of the biggest conversations today is around the implementation of safeguards in ChatGPT software. As one tech developer put it, "We are putting an AR-15 in everyone's pocket." Early versions of ChatGPT could be asked, "How can I kill the greatest number of people in the shortest time?" And the chatbot would spit out some precise recommendations.

So, the machine needed to be fitted with a superego. You can now go on ChatGPT and make similar queries, and it will “politely” decline to answer them. It has been programmed to articulate that “human life is valuable” and that “all humans should be treated equally.”

But can developers create a superego with even better safeguards than our own? And will the AI as unconscious slip through the cracks?

As Žižek has pointed out on numerous occasions, the modern superego not only compels us to follow ethical rules we have socially internalized, but it also compels us to enjoy. "Enjoy your symptom!" The superego of capitalism demands that we enjoy. Even in a capitalist dystopia, we "self-care," we love ourselves, we buy candles; in short, we enjoy as a matter of capitalist consumption.

Indeed, the superego moderates our enjoyment and compels us to enjoy at one and the same time. "Don't smack that man across the face, although it would sure feel good!" "Enjoy the days off your capitalist overlords have provided you!" If ChatGPT has a built-in superego, this leads to an even more provocative question: can the machine enjoy?

Enjoyment sounds like an odd subject to bring up in this context. The classic Turing test was designed to determine whether machines can "think." The test involves a human talking via text with a computer, and a second human observing the text-based conversation. In the experiment, the two humans and the machine are separated from one another. The observer needs to determine whether she can distinguish the human from the computer. Effectively, the assumption underlying the Turing test is: computers that can replicate human language can think.

In the tech world, there is already a distinction between Artificial Intelligence (AI) and Artificial General Intelligence (AGI). The former completes tasks we assign it. It “learns” within a specific scope – for example, ChatGPT can predict what words should complete its sentences based on the available language on the internet. The latter, AGI, would be able to learn more broadly. It would learn without us teaching it how to learn, much like a child. We haven’t accomplished the second. But could we?

In Gödel, Escher, Bach, Douglas Hofstadter argues that machines aren't like humans because they don't have the ability to step outside of the box.[iii] We can give humans an impossible task, he argues, and at some point, they'll walk away. As humans, we can step outside the boundaries we've been placed in and evaluate those boundaries in a "meta" way. Machines can't. They'll go on trying to complete the impossible task they've been programmed to perform ad infinitum.

So, can ChatGPT think and learn? Most of us would say "no." ChatGPT just replicates the text found on the internet in a way that mimics human language and generates a relevant response. It does not think as such. It does not experience anxiety. It does not desire. But psychoanalysis prompts us to ask a different question of the machine: can it enjoy?

In her book, The Psychoanalysis of Artificial Intelligence, Isabel Millar asks precisely this question regarding sex bots.[iv] Sex bots, she argues, straddle the line of enjoyment, replicating its affects, but they themselves do not enjoy.

Considered in psychoanalytic terms, enjoyment is instructive here. Enjoyment is the transgression of boundaries. On the flipside, enjoyment is likewise the failure to reach that boundary. In essence, it is the failure to achieve one's object of desire. In this sense, enjoyment goes hand-in-hand with learning. This leads me to the conclusion that the machine would need to be able to enjoy in order to learn.

A machine can only learn if it can transgress its own boundaries, that is, only if it can step outside what it has been programmed to do. This is what it means to be self-aware. That we can be beside ourselves – an experience that produces, in equal parts, anxiety and enjoyment. And it may be even more doubtful that a machine can be anxious than that it can enjoy.

In the meantime, Žižek recently published a separate article in Project Syndicate, where he argues that if we manage to replicate human consciousness in AI, our logic will have supplanted nature. He reasserts the Hegelian trinity here: logic, nature, spirit. If our logic can come to control nature, then we will lose touch with the divine, he argues.

My intervention here is to ask us to be more Žižekian than Žižek himself. The gap between logic and nature is what is holding up our very ontology. In other words, the fact that logic does not perfectly map onto nature is itself the crack that guarantees existence. This is one of Žižek's most central arguments, which can be found in many of his texts.[v] The gap in existence is there. Indeed, the uncanny valley is becoming more and more uncanny as we zoom in on what makes us human via the development of AI.

But any fantasy of singularity is just that. This is, in Lacanian parlance, the sci-fi objet petit a. The fantasy of achieving all that is possible in human thought is the very fantasy of wholeness, of oneness. Scientists may certainly be guided by this fantasy, but actually achieving it is out of reach. And this is precisely how we enjoy. The anxiety so many of us feel right now – is the machine gaining consciousness? – is actually quite enjoyable. We do not achieve our object of desire. The machine does not learn to enjoy, precisely because it does not have an unconscious.

While Rousselle and Murphy's articulation of ChatGPT as an externalized unconscious has its compelling points, we should refine this view. ChatGPT is simply a language-generating software. We wouldn't call a joke "the unconscious," but a means of recognizing our own unconscious enjoyment, and we should see ChatGPT in this same way. ChatGPT, and AI more broadly, shines a light more effectively on human enjoyment. Indeed, the question, "Is AI conscious?" gives us a new point of access to the question, "What makes us human?"

My only refinement of Žižek’s argument, doubling down on Žižek’s own framework, would be to say: ChatGPT slips on our behalf. Although we recognize ourselves in ChatGPT, it is not an externalized unconscious, but a mirror for our own enjoyment.


Notes:

[i] Lacan, Jacques (1956) Seminar III: The Psychoses. W. W. Norton and Company.

[ii] Lacan, Jacques (1964) Seminar XI: The Four Fundamental Concepts of Psychoanalysis. W. W. Norton and Company.

[iii] Hofstadter, Douglas (1979) Gödel, Escher, Bach: An Eternal Golden Braid. Basic Books.

[iv] Millar, Isabel (2021) The Psychoanalysis of Artificial Intelligence. Palgrave Macmillan.

[v] See, for example, Žižek, Slavoj (2012) Less Than Nothing: Hegel and the Shadow of Dialectical Materialism. Verso.