
A Piece of Artificial Intelligence

Flusser-GPT and the Crisis of Linearity (2023)

Baruch Gottlieb

The recent spate of developments in so-called machine learning only reaffirms Flusser’s contention that causal, linear thinking will never be completely superseded in the post-historical universe of technical images, and that future philosophy will have to oscillate inside and outside the universe of technical images. Every computational apparatus functions on mechanical linear processes: routines, loops, decision trees, which, though ramified, always proceed with historical causality towards the result. What Flusser pointed to as a “post-historical” condition of thinking through technical images is at the same time a “most-historical” condition of myriad ramified linear processes running sequentially at near the speed of light.

Flusser wavers in his writing between affirming the power of thinking through technical images (“Einbildungskraft”) and admitting that linear writing is still fundamental for critical thinking. Flusser’s revolutionary expectation is that his readers will come to develop new cultural techniques of thinking critically through technical images, a sort of second-order criticality which integrates the legacy of mechanistic scientific thinking. This kind of thinking can only be adequate to its task of producing negentropic “new information” to the degree that it defies the program encoded in the apparatus, which is the literate legacy of linear causal thinking, i.e., writing.

Writing produces causal, linear thinking in the reader: historical consciousness, a notion of private interpretation and public politics. As we enter the universe of technical images, we lose the priority to develop our own point of view and begin to “dance around events” in order to get “as many viewpoints as possible”.

“[…] image-makers do not think, they cannot think. Thinking is anti-image. Now they dance around [the event] and by dancing, by collecting points of view, they destroy ideology, which is the insistence on one point of view.”

- Vilém Flusser, “Television Image and Political Space in the Light of the Romanian Revolution” (lecture, Budapest, April 7, 1990), in Miklós Peternák et al., We shall survive in the memory of others (Köln: Verlag der Buchhandlung Walther König, 2010), 24min30s


The situation is contradictory, since technical images and electronic information are themselves products of the scientific tradition of causal historical thinking. The new ambient post-historical experiences in the new universe still have to be understood as subtended by, and dependent on, causal thinking.


Flusser has a double suggestion for how to go about understanding, or challenging the logic of technical images. On one hand, he presents the apparatus as an inscrutable and impenetrable “black box” which is best understood by playing with and against it, using it for purposes it was not designed for, and heuristically, gradually identifying patterns of behaviours which will allow us to escape its program. On the other hand, Flusser suggests we may be able to reprogram the apparatus by learning the codes with which it is programmed. So while the former approach is performative, dialogical and intuitive, the latter is a continuation of literate, causal analysis and critique.


In our Flusserian analysis of LLMs and other machine-learning applications which are becoming more widely used in everyday life, our challenge was first to establish that, behind the apparent paradigm shift in cultural practices of communication, there persists, in the programming of the apparatus, the linear, causal, historical structure of thinking of the previous age. The new realm of all-at-once information is thus both truly new and at the same time radically conventional.


The T of GPT stands for “transformer”. A transformer is a program that analyses patterns of contextual likelihood. In other words, the transformer develops a kind of predictive map of what is likely to be found in the neighbourhood of any data point. The transformers in Large Language Models study language by searching forward and backward from various words in a sentence. By comparing perspectives from various places in a text, they can build very accurate models of sentence and argument formation. However, for Flusserian analysis it is important to note that the “map” of likelihoods is always traversed, or read, linearly when used for a prediction. Linearity is not transcended, only ramified.
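The point can be illustrated with a deliberately tiny sketch, in no way the architecture of an actual transformer: even when the “map” of likelihoods is built by surveying words in context, generating from it remains a strictly sequential, causal walk, one step after another. The toy corpus and all names here are invented for illustration.

```python
import random
from collections import defaultdict

# A miniature "map" of contextual likelihoods: bigram counts built
# by surveying the whole text at once.
corpus = "the apparatus runs the program and the program runs the apparatus".split()

likelihoods = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    likelihoods[prev][nxt] += 1  # count what follows each word

def generate(start, steps, seed=0):
    """Use the map for prediction: a linear, step-by-step traversal."""
    rng = random.Random(seed)
    word, out = start, [start]
    for _ in range(steps):
        followers = likelihoods.get(word)
        if not followers:
            break
        # each step is a causal, sequential choice conditioned on the last word
        word = rng.choices(list(followers), weights=list(followers.values()))[0]
        out.append(word)
    return " ".join(out)

print(generate("the", 5))
```

However the likelihood map is assembled, the generation loop above can only ever consult it one word at a time: the ramification is in the map, the reading of it stays linear.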


The P in GPT stands for “pre-trained”. Because building an adequate model of language use requires a massive dataset of sample text, as well as extensive computation by the transformer, LLM applications do not need to start from scratch if they use a pre-trained model. Here arise many concerns about what goes into the model and what is left out. For example, Wikipedia was one of the main language sources used to train OpenAI’s GPT. Anyone who has tried to edit Wikipedia will know that its WP:RS (“Reliable Sources”) rule on contributions produces a bias towards “mainstream”, in other words corporate-media, interpretations of events. If we consider US corporate media an important “apparatus” we must play against using GPTs, the problem is significantly intensified by the fact that the reality reproduced by GPT is pre-trained on Wikipedia. Let us remember that we are deeply in a world of texts: the texts of articles, emails and chats which make up the pre-training corpus, and the texts of the prompts by which we call on a GPT to generate a response.
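The dependence on the corpus can be made concrete with a toy sketch (not any real training pipeline; both miniature “corpora” below are invented): a model trained on a body of text can only redistribute what that text contains, so whatever the corpus omits simply does not exist for the model.

```python
from collections import Counter

# Two invented miniature corpora standing in for different editorial worlds.
mainstream_corpus = "markets recovered investors welcomed reform markets stabilised"
marginal_corpus = "workers struck tenants organised strikers occupied"

def word_profile(text):
    """The 'world' a frequency model sees is exactly its training text."""
    return Counter(text.split())

profile = word_profile(mainstream_corpus)
print("workers" in profile)  # False: what is absent from the corpus is absent from the model
print(profile["markets"])    # 2: what is over-represented in the corpus dominates the model
```

Swapping in `marginal_corpus` inverts the result: nothing in the mechanism prefers one world over the other; the preference is entirely in what was selected for pre-training.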

The linear nature of the programming responsible for the impressive GPT LLM results confirms that it is a machine like any other, and that the cultural objects it produces are the result of an industrial process. In this sense, GPT is no different from a platform like Facebook: it matters who owns the industrial “dispositif” which avails all of us of this service. For example, a paper on GPT-2 states that results are censored so as not to allow GPT to generate incitements to violent action. Four categories of dangerous incitement were identified: white supremacist, Islamic fundamentalist, anarchist and socialist. It is striking that fascist texts do not seem to be a problem for OpenAI. Through experimental prompting, it is clear that there is an anti-communist tendency in this filtering.


The results of LLMs are multiply constrained: first by the content of the large dataset used to pre-train the model, and finally by a kind of editorial filtering imposed to protect society from some of the potential damage unleashed by the technology. Whereas dangerous materials may be purchased under legally enforced restrictions, increasingly we have a situation where the tools we use are programmatically prevented from being used in various ways. This is the case not only with LLMs, which also include rather puritanical limits on sexuality and eroticism, but increasingly with other software and even hardware which monitor users and arrest behaviours designated as inappropriate, such as cars which attempt to prevent owners from driving while intoxicated.
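The structure of such programmatic constraint can be sketched in a few lines (a crude illustration; the category names are invented and are not those of any actual provider): a rule encoded into the tool itself withholds output before the user ever sees it.

```python
# Invented, illustrative blocklist: the rule is part of the tool, not the law.
BLOCKED_TOPICS = {"incitement", "explicit"}
REFUSAL = "[response withheld by content filter]"

def moderate(text: str, tags: set) -> str:
    """Return the text only if none of its tags fall in a blocked category."""
    if tags & BLOCKED_TOPICS:  # set intersection: any overlap triggers refusal
        return REFUSAL
    return text

print(moderate("a poem about spring", {"poetry"}))   # passes through unchanged
print(moderate("a call to riot", {"incitement"}))    # withheld before delivery
```

Real moderation pipelines are far more elaborate, but the shape is the same: the constraint is executed inside the apparatus, invisibly and prior to any dialogue with the user.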


When law is encoded directly into software, we are no longer talking about post-linearity in any operative way. And the law itself, though produced in principle through a process of rational linear argument, is in practice produced through an exercise of arbitrary power and force. As before, linearity, the mechanism, is simply a tool to augment the exercise of power by an individual or group. So we will have built-in AI copyright enforcement, morality enforcement, and policing which serves to repress and constrain behaviours considered undesirable. Certainly this will produce subcultures which resist this control, but these are always overdetermined by the logic of control and cannot produce radically alternative outcomes (Audre Lorde’s “master’s tools”, etc.).


Flusser implied, with his suggestion to treat the apparatus like a black box, that it does not really matter how the apparatus works as much as it matters who owns and controls it. Any apparatus can be used for good or ill; what determines the real outcome is a complex set of social checks and balances, prevailed over by the rich and powerful. As Taoism teaches, a decision does not necessarily follow from the preceding deliberation, for it is always an arbitrary decision to stop deliberating and decide. So linear and rational argument has only ever been part of the story of modernity. Rationality has unleashed enormous potential in automated systems and machines, but human beings’ ability to determine the kind of society they live in has generally been subject to the needs of the wealthy. Only in the socialist vision, where the ruling class is itself a council of councils ramifying upward from the productive forces at the bottom of the apparatus-reproduction chain, could the wantonness of the wealthy be subordinated to the needs of the great majority. My friend Dmytri Kleiner likes to say, “Capitalism is too stupid for AI.” AI at best reveals to us what we are, but corporate filtering prevents us from benefiting from this oracular power. Only under a scientific regime, where reality must be encountered in all its difficulty, not only by experts but by the population in general, will the revelations of AI be emancipated to produce other social consciousnesses and prospects. Under capitalism, AI will only serve to exacerbate already existing injustice. Only under socialism will AI be emancipated to radically improve the lives of the generality.

Flusser was a Marxist as a youth. His parents and Edith’s were part of a Marxist intellectual circle in Prague, where Marx, beyond his revolutionary advocacy, was considered an important thinker to read in order to understand the world. Flusser rejected Marxism after his exodus from Prague but never completely abandoned Marx, whose writing remained for him paradigmatic of the linear, causal, historical analysis of the world which, according to Flusser’s theory, is encoded into the functioning of the apparatus. So whereas technical images can produce new situations, they are still dialectically consigned to historical flows. When the socialist projects in Eastern Europe began to collapse in 1989, Flusser was cautious and even ominous about the prospects:


“To say that, of course, no doubt, that the apparatus of the communist party was a terribly oppressive apparatus. And those who fought it by working against, those metaphorical photographers […] used the apparatus to play against the apparatus. But now that the apparatus was destroyed, chaotic situations menace us.” – Vilém Flusser, “On technical images, chance, consciousness and the individual” (interview by Miklós Peternák, Munich, October 17, 1991), in Miklós Peternák et al., We shall survive in the memory of others (Köln: Verlag der Buchhandlung Walther König, 2010, 87min), 38min17s

From the threat of this chaos, Flusser returns, in his later writings and interviews, to studies of intersubjectivity from his youth: Buber, Husserl, and the Talmud. In an interview with Miklós Peternák and László Beke in 1990, he frames prehistory as Jewish and history as Christian, which implies that post-history would be a return of resonant, oral-culture, Jewish dialogical forms of thinking. But this apparent re-Socratic turn, to a world of spoken, intersubjective philosophy, is now subtended by the legacy of historical linear writing in the digital apparatus, which affords this in radically new ways. Flusser dares to sketch out a scenario, which today sounds like a teleconference, where interlocutors are connected through the experience of God in the other. But again, this is not merely the “only permitted image of God” as it was in antiquity, but a synthetic image of God, criticised from within by the programming of the apparatus which provides it.

“…the only way I can imagine God is to look at the other person. This is to say that only through the love of my neighbour can I love God. […] the […] only permitted image […] is the face of the other. But, the synthetic image - computer-image - is the other person. Because through the computer-image, I can talk to the other person: he sends me his image, I work on it and send it back to him - so this is the Jewish image. This is not an idol. This is not paganism. It is a way to love my neighbour, and by loving my neighbour, to love God. So I am not a good Talmudist, but I would say that from a Talmudic point of view, the synthetic computer-image is perfectly Jewish.” - Vilém Flusser, “On religion, memory and synthetic image” (interview by László Beke and Miklós Peternák in Budapest, April 7, 1990), in We shall survive in the memory of others (Köln: Verlag der Buchhandlung Walther König, 2010), 13min30s


Facing the other through the linear/causal mesh of scientific analysis is, for Flusser, the messianic apotheosis, the resolution, the super-resolution, of the original sin. The abstracting sin of literacy has been overcome through the synthesising technical image. Now we analyse each other while we interact with each other intuitively. The analysis is a given; it is subliminal, infrastructural, engineered into the experience. We need not be critical ourselves, in the sense of stepping out of the situation to observe and analyse, because that whole process is supersaturated in the texture of the image of the other we are interacting with. Rather, we may dedicate our critical faculties to informing the other, or better, to generating unlikely information with the other.


However, missing from this messianic return is a critique of the material ownership of the apparatus on which all this divine activity is to take place. This is not an intersubjectivity which takes place on common ground, unmediated, between two humans. The radical synthesis of rationality and intuition in the technical image depends on the industrial apparatus brought forth by the global electronics production chain, with all its legacy of imperialism and colonialism, and the white supremacy, racism, misogyny, etc., these entail. Flusser always assumes the equal agency of all parties, and so does not entertain the possibility that some may have a harder time accessing, enjoying or taking full advantage of the telematic world which is emerging. It is one thing to criticise, using rational arguments, the rational, causal, historical alphanumeric code which runs inside all computerised affordances; it is quite another to challenge the power structures which determine when and how the apparatus is reproduced in order to run its programs, a realm beyond rational critique, where all possible rational critique is subordinated to the purposes of those who prevail.

Baruch Gottlieb

December 2023, Seoul
