Harry G. Bartholomew and Maynard D. McFarlane were already digitizing images in the 1920s. Working for the newspaper industry, the British inventors developed a technology for transmitting images over long distances, allowing them to send rasterized photographs across the Atlantic by cable. In the 1960s and 1970s, digital imaging techniques played an increasingly large if still limited role, especially in scientific contexts such as radiography. By the 1980s at the latest, the digital image began to assert itself on the screen of the personal computer. The first digital cameras appeared on the consumer market in the early 1990s and quickly displaced analogue photography, transforming it into a niche product. While Western cinema and television were still being gradually digitized in the early twenty-first century, the computer had long since prevailed as the new “master medium” (Zielinski, Audiovisions, 1999, p. 8).

What differentiates the digital image from its predecessors is, above all, the way it is produced: it is generative, meaning that it is generated from digital code. Such code can be losslessly reproduced and modified at will, and with the appropriate technologies it can be transmitted at the speed of light. Whereas the “old” analogue images are, according to Vilém Flusser, merely abstractions of the already existing (the “probable”), the digital images that were “new” in his day enable us to project the nonexistent (the “improbable”). The resulting constellations of points are merely semblances of surfaces, since they possess no “base.” But because they also “purport to correspond,” with mathematical precision, “point for point to the world outside” (Lob der Oberflächlichkeit, 1993, p. 55; translated from the German), they are seductive and not infrequently deceptive.

Original article by Clemens Jahn

surface.txt · Last modified: 2021/11/05 17:47 by