Images, among other sign systems, are structurally better suited for a pragmatic framework marked by continuous multiplication of choices, high efficiency, and distributed human experience. But in order to use images, the human being had to put in place a conceptual context that could support extended visual praxis. When the digital computer was invented, none of those who made it a reality knew that it would contribute to more than the mechanization of number crunching. The visionary dimension of the digital computer is not in the technology, but in the concept of a universal language, a characteristica universalis, or lingua Adamica, as Leibniz conceived it.
This is not the place to rewrite the history of the computer or the history of the languages that computers process. But the subject of visualization, presented here from the perspective of the shift from literacy to the visual, requires at least some explanation of the relation between the visual and the human use of computers. The binary number system, which Leibniz called Arithmetica Binaria (according to a manuscript fragment dated March 15, 1679), was not meant to be the definitive alphabet, with only two letters, but the basis for a universal language in which the limitations of natural language are overcome. Leibniz tried hard to make this language utilizable in all domains of human activity: in encoding laws, scientific results, music. I think that the most intriguing aspect, ignored for centuries, was his attempt to visualize events of an abstract nature with the help of the two symbols of his alphabet. In a letter to Herzog Rudolph August von Braunschweig (January 2, 1697), Leibniz described his project for a medal depicting the Creation (Imago Creationis). In this letter, he actually introduced digital calculus. Around 1714, he wrote two letters to Nicolas de Remond concerning Chinese philosophy. It is useful to mention these here because of the binary number representation of some of the most intriguing concepts of the I Ching. Through these letters, we are in the realm of the visual, and in front of pages in which, probably for the first time, translations from the ideographic to the sequential, and finally to the digital, were performed. It took almost 300 years before hackers, trying to see if they could use the digital for music notation, discovered that images can be described in a binary system.
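Leibniz's observation can be made concrete with a minimal sketch. Assuming, as he did, that a broken hexagram line reads as 0 and an unbroken line as 1, each of the sixty-four hexagrams corresponds to a six-bit binary number; the hexagram names below are illustrative examples.

```python
# Sketch of Leibniz's mapping from I Ching hexagrams to binary numbers.
# Assumption: a broken line encodes 0, an unbroken line encodes 1, and
# the six lines are listed bottom to top (bottom line = least significant bit).

def hexagram_to_number(lines):
    """Read six lines (bottom to top) as the bits of a binary number."""
    value = 0
    for bit in reversed(lines):  # top line contributes the most significant bit
        value = value * 2 + bit
    return value

# "Qian" (six unbroken lines) and "Kun" (six broken lines)
qian = [1, 1, 1, 1, 1, 1]
kun = [0, 0, 0, 0, 0, 0]

print(hexagram_to_number(qian))  # 63
print(hexagram_to_number(kun))   # 0
```

The translation the passage describes — from the ideographic to the sequential to the digital — is exactly this reading of a visual pattern as a number.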
This long historic parenthesis is justified by two thoughts. First, it was not the technology that made us aware of images, or even opened access to their digital processing, but intellectual praxis, motivated by its own need for efficiency. Second, visualization is not a matter of illustrating words, concepts, or intuitions. It is the attempt to create tools for generating images related to information and its use. A text on a computer screen is, in fact, an image: a visualization of language generated not by a human hand in control of a quill, a piece of lead or graphite, a pencil, or a pen. The computer does not know language. It translates our alphabet into its own alphabet and, after processing, translates it back into ours. Displayed through stored images which, if cast in lead, would constitute the contents of the lower and upper cases of the drawers in a typography shop, this literacy is subject to automation.
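The double translation described above — from our alphabet into the computer's two-letter alphabet and back — can be sketched in a few lines. This is one possible encoding (UTF-8) among several in actual use, not the definitive mechanism.

```python
# Sketch of the round trip between our alphabet and the computer's:
# text -> bits -> text. UTF-8 is assumed here; other encodings exist.

def to_bits(text):
    """Encode text as a string of zeros and ones."""
    return "".join(f"{byte:08b}" for byte in text.encode("utf-8"))

def from_bits(bits):
    """Decode a string of zeros and ones back into text."""
    chunks = [bits[i:i + 8] for i in range(0, len(bits), 8)]
    return bytes(int(chunk, 2) for chunk in chunks).decode("utf-8")

encoded = to_bits("A")
print(encoded)             # 01000001
print(from_bits(encoded))  # A
```

What appears on screen as the letter A is, internally, only this sequence of zeros and ones; the letter itself is an image rendered from them.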
When we write, we visualize, making our language visible on paper. When we draw, we make our plans for new artifacts visible. The mediation introduced by computer use does not affect the condition of language as long as the computer is only the pen, keyboard, or typewriter. But once we encode language rules (such as spelling, case agreement, and so on), once we store our vocabulary and our grammar, and mimic human use of language, what is written is only partially the result of the literacy of the writer. The visualization of text is the starting point towards the automatic creation of other texts. It also leads to establishing relations between language and non-language sign systems. Today, we have at our disposal means for electronically associating images and texts, for cross-referencing images and texts, and for rapidly diagramming texts. We can, and indeed do, print electronic journals, which are refereed on the network. Nothing prevents such journals from inserting images, animation, and sounds, or from facilitating on-line reactions to the hypotheses and scientific data presented. That such publications need a shorter time to reach their public goes without saying. The Internet thus became the new medium of publication, and the computer its printing press, a printing press of a totally new condition. Individuals constituting their identity on the Internet have access to resources which until recently were available only to those who owned presses, or gained access to them by virtue of their privileged position in society.
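What it means to "encode language rules" and "store our vocabulary" can be suggested by a deliberately minimal sketch: a stored word list and a trivial checker. The vocabulary below is illustrative, not a real lexicon, and actual spelling checkers are far more elaborate.

```python
# Minimal sketch of encoded language rules: a stored vocabulary and a
# trivial spelling check. The word list is illustrative only.

vocabulary = {"image", "text", "language", "computer"}

def check_spelling(words):
    """Return the words not found in the stored vocabulary."""
    return [w for w in words if w.lower() not in vocabulary]

print(check_spelling(["Computer", "langage"]))  # ['langage']
```

Even at this toy scale, what is "written correctly" is partly decided by the stored rules, not by the writer alone.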
The visual component of computer processing, i.e., the graphics, relies on the same language of zeros and ones through which the entire computer processing takes place. As a result of this common alphabet and grammar (Boolean logic and its new extensions), we can consider language-to-image translations, number-image relations (such as diagrams, charts, and the like), and also more abstract relations. Creating the means to overcome the limitations of literacy has dominated scientific work. The new means for information processing allow us to replace the routine of phenomenological observation with the processing of diverse languages designed especially to help us create new theories of very complex and dynamic phenomena.
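The claim that graphics rely on the same two-letter alphabet as everything else the computer processes can be made concrete. The sketch below treats a tiny, hypothetical 5x5 bitmap as nothing but bits; rendering it is merely a translation from that alphabet into marks on a surface. Real image formats are more complex, but the principle is the same.

```python
# Sketch: an image reduces to the same alphabet of zeros and ones.
# A hypothetical 5x5 bitmap; 1 = ink, 0 = blank.

bitmap = [
    [0, 1, 1, 1, 0],
    [1, 0, 0, 0, 1],
    [1, 1, 1, 1, 1],
    [1, 0, 0, 0, 1],
    [1, 0, 0, 0, 1],
]

def render(rows):
    """Translate bits into visible marks, row by row."""
    return "\n".join("".join("#" if bit else "." for bit in row) for row in rows)

print(render(bitmap))
```

The same bits could equally well be read as a number or as text: the common alphabet is what makes the translations between sign systems possible.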
The shift to the visual follows the need to change the accent from quantitative evaluations, and language inferences based on them, to qualitative evaluations, and to images expressing such evaluations at significant moments of the process in which we are involved. Let us mention some of these processes. In medicine, in the search for syntheses of new substances, and in space research, words have proven to be not only misleading, but also inefficient in many respects. New visualization techniques, such as those based on magnetic resonance, freed the praxis of medicine from the limitations of word descriptions. Patients explain what they feel; physicians try to match such descriptions to typologies of disease based on the most recent data. When this process is networked, the most qualified physician can be consulted. When experimental data and theoretic models are joined, the result is visualized and the information exchanged via high-speed broadband digital networks.
Based on similar visualization techniques, we acquire better access to sources of data regarding the past, as well as to information vital for carrying through projects oriented towards the future. Computed tomography, for instance, visualized the internal structure of Egyptian mummies. Three-dimensional images of the whole body were created without violating the casings and wrappings that cover the remains. The internal body structure was visualized by using a simulation system similar to those utilized in non-intrusive surgery.
The design and production of new materials, space research, and nano-engineering have already benefited from replacing the analytical perspective ingrained in literacy-based methods with visual means for synthesis. It is possible to visualize molecular structures and simulate interactions of molecules in order to see how a medicine affects the cells treated, the dynamics of mixing, chemical and biochemical reactions. It is also possible to simulate forces involved in the so-called docking of molecules in virtual space. No literacy-based description can substitute for flight simulators, for the visualization of data from radio astronomy, or for large areas of genetics and physics.
Not the last among examples to be given is the still controversial field of artificial intelligence, seduced by the prospect of emulating behaviors usually associated with human intelligence in action. But it should not surprise anybody that while the dynamics of the civilization of illiteracy requires freedom from literacy, people will continue to preserve values and concepts they are used to, or which are appropriate to specific knowledge areas. Paradoxically, artificial intelligence is, in part, doing exactly this.
When people grow up with images the same way prior generations were subjected to literacy, the relation to images changes. The technology for visualization, although sometimes still based on language models, makes interactivity possible in ways language could not. But it is not only the technology of visualization applied within science and engineering that marks the new development. Visualization, in its various forms and functions, supports the almost instantaneous interaction between us and our various machines, and among people sharing the same natural environment, or separated in space and time. It constitutes an alternative medium for thinking and creativity, as it did throughout the history of crafts, design, and engineering. It is also a medium for understanding our environment, and the multitude of changes caused by practical experience involving the life support system. Through visualization, people can experience dimensions of space beyond their direct perception; they can consider the behavior of objects in such spaces, and can also expand the realm of artistic creativity.