In July 2022, the Midjourney platform was still at an embryonic stage and accessible to only a few. Back then, I wrote an article (or, rather, two articles) on the topic. My thesis was that in the future, when anyone could falsify any image in nearly perfect fashion and in vast quantities, the testimonial value of a photograph – already in a state of crisis due to the development of computer graphics – would be nullified. The future I spoke of arrived a mere eight months later, as shown by the dramatic proliferation of the now infamous non-photos (or rather synthographies) of the Pope in a down jacket and of Trump under arrest. Now that this has become common knowledge, panic sets in: we’ll be inundated with fake news, deep fakes, and so on, as if we haven’t been already.

In attempting to analyze these new tools, I have often used the metaphor of the birth of photography, specifying, however, that there is one area in which the two technologies differ greatly. While the testimonial value of the image arises with photography, it dies definitively with text-to-image (TTI) software. It’s not immediately clear whether this is good or bad news; after all, human societies managed without the testimonial value of images for millennia. However, the world is not as it was in the past, and we must place this event in the context of total interconnection and globalization. In other words, we can’t just shrug it off.

Let’s start from here: this is where we are, and we must come to terms with it. Even the dream of the most hostile Luddite is dashed by the fact that these technologies now exist, and even if they were banned, people could still use them illegally. Just as waking annihilates the truth value of a dream, once trust in the image has collapsed there is no going back. Nonetheless, it is reasonable to ask whether we still had any trust in images and videos before the advent of TTI software. If this trust wasn’t already dead, it was in very bad shape. This became evident both with Covid and with the conflict in Ukraine: the more falsifiable an image is, the more it is counterfeited, and the more trust in truthful testimony declines. Iconic in this regard is the episode in which one of the first bombings in Ukraine was illustrated on Italian TV news with footage from the video game War Thunder – iconic also because the bombs, though not those and not in that way, really were devastating Ukrainian territory. Besides, over the last few years it has become clear that trust in the “evidence” of a fact no longer exists among the general public, regardless of the media presenting it. Trust in authority has crumbled along with trust in documents, unfortunately infecting even trust in scientific research.

In a not-too-distant future, what might happen when AI can produce false but believable images and words, effortlessly self-modified and enhanced by knowledge of psychological patterns that are invisible to us? If this power is held by a few, in a scenario where AI is closed in its code, non-transparent, and monopolistic, the prediction is simple: a dystopian hell of advertising and propaganda, in which it will be difficult to distinguish among thousands of “almost human” bots. We will easily fall into the mesmerizing web of those who want to exacerbate already profound social inequalities for their own advantage, or, in the best case, the environment will become so oppressive and pervasive that it will drive us away from social media and perhaps from most of the internet as we know it. As Daniele Signorelli writes in Wired, commenting on the letter promoted by the Future of Life Institute that calls for a pause of at least six months on AI development,

“a (very minor) part of this appeal indeed focuses on the (concrete) risks that these machines will “flood our information channels with propaganda and falsehoods” and become powerful tools for disinformation that are easy to use. Think of the fake photo of the pope in a white puffer jacket and imagine a future in which – in texts, photos, and even videos – it will be increasingly difficult to distinguish what is true from what is false.

All of this is happening in the complete absence of any regulatory framework, inevitably increasing the risks posed by the spread of these tools. It is probably these kinds of dangers that have also led figures such as Gary Marcus (one of the most prominent critics of excessive expectations placed on deep learning) or Yoshua Bengio (winner of the Turing Award for being among the inventors of deep learning itself) to sign the letter calling for an AI moratorium.”

Setting aside some dubious claims related to the equally dubious philosophy of “longtermism,” many of the concerns raised by the Future of Life Institute are reasonable, and a six-month pause in the development of AI would undoubtedly be wise. That said, it is rather funny to find among the signatories billionaires who just happen to be lagging behind in the new market race. Beyond that, there is something grotesque about companies worrying over damage to a system that has helped them prosper while contributing to misinformation, job crises, and social inequalities for decades.

As is often the case, what is taken for granted in this letter is more interesting than what is actually said. We read: “Should we let machines flood our information channels with propaganda and untruth?” And the answer is obviously no: regulation is necessary, but then again, it always has been. These are certainly not new phenomena; in fact, they are risks inherent in the structure of social networks and in the algorithms that govern them. I find another question in the letter extremely significant: “Should we automate away all the jobs, including the fulfilling ones?” Again, the answer is obviously no. But it is worth asking why the question assumes that mechanizing some jobs defined as “fulfilling” must inevitably lead to an acceleration of exploitation. If I give a farmer a tractor to make the work in his field easier, he will be happy; if I then fire him because the tractor means I need fewer workers, he will be angry.

The idea of a fulfilling job is a rhetorical trap. On the one hand, it underwrites the exploitation prevalent in some sectors, such as the cultural one: you do a job you love, so why would you also expect to be well paid? On the other hand, it assumes that if the job is automated, the person who does it must automatically end up on the street. If the article you are reading had been written in half the time with the help of AI software, why should I, as the author, feel defrauded? On the contrary, I could use the time left over to read, meditate, take a walk, write something else… I could even use it to do something with an AI, for pleasure or personal research. The risk only arises if my employer demands double the productivity or half the employees, and this is not an intrinsic danger of the technology. As Andrea Colamedici and Maura Gancitano write in Ma chi me lo fa fare?, “the idea of working just to follow one’s passion often assumes that every individual has a unique and specific passion, which can be exclusive and invalidating for people who don’t have such a defined passion or who have multifocal interests and talents. Today we can’t ask work to offer us all the meaning of life.”

The idea that AI will simply slot into an unchanging society brings to mind the meme “you don’t hate AI, you hate capitalism” and the famous phrase “it’s easier to imagine the end of the world than the end of capitalism,” which applies just as well to other forms of human association, so long as we are fully immersed in them. It is curious that when a technology proves harmful in a particular social context, we don’t even consider the possibility of modifying that social context, or, rather, the possibility that the social context itself is riddled with problems which the technology is merely exacerbating.

This dystopian scenario seems even more likely if, as often happens in computing, open-source projects cannot be stopped, making perfect fakes available to anyone. The result is easy to imagine: a proliferation of racist, homophobic, sexist, pornographic, or even just completely crazy bots, perhaps created for fun by some sufficiently nerdy teenager.

Enforcing preventive censorship in the way that many companies (such as OpenAI and Adobe) do, apart from being severely limiting for any creative project, would not get us far once ways are discovered to bypass these technological barriers. Moreover, a tool that cannot do some things… well, it’s far less useful.

If censorship does not help us escape the dystopia, the massification of lies could paradoxically counteract it. In the hell we have described, words and images would become so drained of power as to lose much of their coercive value, or even just their psychological influence; if spam becomes indistinguishable from content, everything becomes spam. How would social media survive an invasion of duplicate profiles indistinguishable from humans? A team of blade runners doesn’t seem feasible, but it is difficult to make credible predictions. All we can do is imagine.

Let’s imagine, then. We’re probably already tired of replying to humans who seem like bots. After the invasion of replicants, we would lose any desire to interact with people who are not, so to speak, of proven humanity, like those we have met in person and then online. One possibility is that social media will be formalized in such a way as to require a certification of identity from those who register, as is already the case in some instances. In this way, we would only speak to profiles with the “human badge,” unambiguously conferred by a designated entity, presumably in person, as with passports. The level of dystopia in this case decreases, because it brings a kind of accountability to the web that makes replicants pointless, although no one could prevent us from publishing false photos and content, of course. These might be labeled in their metadata as the work of a TTI, which is a great idea, but it wouldn’t work for texts written by a GPT.

Perhaps another criterion will come into play, that of social reputation. If we build a reputation as liars, few will trust us. And if we cannot create thousands of fake profiles but are, instead, bound to a digital identity, should we lie, we would do so at our own risk. Where our faith in documents dissolves, faith in people re-emerges, even though this, too, can be more or less misplaced. It is worth recalling that trust is also based on psychological patterns, and, if AIs become better than us humans at reading and implementing such patterns, then dystopia returns.

These tools confront us with the ultimate stage of post-truth, namely the impossibility of faith, a sort of scepticism to the nth degree. If I am no longer able to tell where deceit is hidden, I cannot believe in anything of which I am not myself a witness. While such a dystopia is extremely probable in the case of monopolies, even a policy of maximum openness, availability, and transparency of AI software is not without risks. However, we must bear in mind that the dangers of these technologies – like those of all others – lie predominantly in their uses, although those uses, by guiding development, also partly determine their form. Before this form crystallizes into something too dangerous, therefore, we need a critical and attentive gaze that is as broad as possible and does not serve the narrow interests of a few.

A good example is this initiative, which aims to create a sort of CERN of AI: a public, open, controlled, and independent project, available to all and not in the service of any company or monopoly. As LAION puts it: “We must act promptly to ensure the independence of academia and government institutions from the technological monopoly of large corporations such as Microsoft, OpenAI, and Google. Technologies like GPT-4 are too powerful and significant to be controlled exclusively by a few.” Between monopoly and openness, we should prefer the latter, which would at least allow us to fight on equal terms. In our analyses, we should always bear in mind that the enemy is not the tool but the person who wields it and gives it shape.

***The Italian version of the article is available here***