David D. Thornburg, Associate Editor
Visual Computing, Part 1
In January 1984 Apple launched the Macintosh, a computer that would accelerate a revolution in computing that had already been gathering momentum for some time. This revolution was not in the computer hardware itself, although the hardware certainly played a role. The revolution was in the way we communicate with our computational technology.
The Macintosh was the first low-cost personal computer to incorporate a primarily pictorial user interface. Rather than having to deal with words and phrases to convey information or desires to the computer, you can select small images (icons) that represent the objects with which you want to work. To edit a document with the word processor, for example, you simply place the cursor over the document (shown as a page with a label beneath it) using a pointing device called a mouse. Once the cursor is over the document, two clicks of the mouse are all that's needed to load the document (and the word processor!) into the computer.
The difference between loading a program or text file in this fashion and loading it by typing commands at the keyboard is subtle. To understand the nature of this difference, and why the visual interface appeals to some users and not to others, we need to explore the different ways that people "think."
David D. Thornburg feels comfortable working across the text-picture boundary and has written a dozen books on computing, including The KoalaPad Book (Addison-Wesley) and 101 Ways to Use a Macintosh (Random House). His most recent book, Beyond Turtle Graphics, describes the nongraphics aspects of the computer language Logo. This book is an introduction to artificial intelligence and will be available soon from Addison-Wesley. Thornburg is currently working on his first novel.
The Two Brains
Several years ago it was in vogue to think of human thinking style as lateralized to the two hemispheres of the brain. Thinking that takes place in the left hemisphere is linear and analytical; thinking that takes place in the right hemisphere is parallel, visual, and creative. This model of mental activity became so popular that we found ourselves referring to artists as "right-brained" people and to analytical thinkers as "left-brained."
In fact, we all have the ability to think with both sides of our brain: to be both analytical and creative, to think both linearly and in parallel. It is true that many of us spend more time in one mode of thought than the other. It is also true that our society seems to develop and encourage our analytical, linear thinking at the expense of our creative mind. But it is both unfair and inaccurate to suggest that any individual is purely "left-brained" or "right-brained."
When interactive computer systems were first developed for mass production, it was decided that people should communicate with these machines through the typewriter keyboard and that the computer should respond primarily through a text-based display. Interestingly, the dedicated videogame computers being developed at the same time used devices such as joysticks and game paddles instead of the keyboard, and produced colorful graphic images rather than text displays.
Anyone who remembers the fads of the late 1970s will recall that videogame consoles outsold personal computers many times over. This extremely high ratio of game to computer sales was not based on price alone. The fact was that purchasers of game machines knew exactly what to do with them as soon as they were plugged in. The videogame was extremely easy to use; intuitively easy, perhaps.
Nothing Automatic
Personal computers, on the other hand, seemed designed for the linear, analytical mode of thought. Nothing happened automatically; the keyboard had to be used for everything, including loading a program in the first place.
Consider, for example, the process of starting a game on the Atari 2600 Video Computer System and on the Commodore 64. With the Atari game machine, one needs only to insert the game cartridge and switch on the power. The same is true of the Commodore 64 with cartridge games, but the story is quite different when the program is provided on disk. You must then enter:
LOAD "*",8
RUN
to get the game into the computer.
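If you have never used a Commodore 64, here is the same two-line sequence again with an explanatory remark appended to each line. The REM remarks are annotations added here for explanation; they do nothing when executed and can be omitted entirely.
LOAD "*",8 : REM "*" MATCHES THE FIRST PROGRAM ON THE DISK; 8 IS THE DISK DRIVE'S DEVICE NUMBER
RUN : REM START THE PROGRAM THAT HAS JUST BEEN LOADED
Even spelled out this way, the point stands: nothing about the sequence is discoverable from the machine itself. You have to know it, or look it up, before anything happens.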
This difference in user interface has nothing to do with technological differences between the two machines. The fact that the Commodore 64 has more RAM, or a disk drive, or can be used with thousands of different programs, is not the issue. In fact, most personal computer users expect to have to type strings of textual information into their computer to make it do something useful.
Mainly The Keyboard
For those of us who have used computers for a long time, none of this represents any hardship; it is simply "how things are done." Of course we are happy when the interface is simplified. Almost all Apple II owners, for example, equip their computers with "autostart ROMs" that let a program boot from the disk automatically when the computer is turned on.
But still, the keyboard has maintained its role as the primary communication tool, even when the information to be communicated is nontextual.
This restriction in interface technology has kept many people from using computers. A major typing tutor program was promoted with the slogan "If you can't type you can't compute." For the vast majority of potential computer users in the world, this amounts to disfranchisement.
Fortunately, the slogan was wrong. Typing has nothing whatsoever to do with computing. All that is needed is a variety of communication tools across the man-machine interface to make computers accessible to anyone who wants to use them.
What made the Macintosh different was that it provided another type of interface: one that was primarily visual rather than textual.
A Step Back?
Of course, there are critics who would argue that the visual interface is a giant step backwards: that we gave up iconographic writing many years ago in favor of building words from an alphabet of letters. These same people might argue that those cultures whose language is still recorded in iconographic form are burdened with a cumbersome writing system that has hampered their development.
The visual computer interface has nothing to do with how we write. I am not arguing that we should do away with our alphabet or with words or with writing. I am not suggesting that we should use nothing but pictures in our next letter to Aunt Elsinore. What I am suggesting is that, when we are referring to the operations to be performed by a computer, it is only a matter of convention that we refer to these operations in written form. The convention to build programming languages from a vocabulary of English words was completely arbitrary. It was done, in part, because computer systems were provided with keyboards.
In fact, the first computer programs, devised by Lady Lovelace for Babbage's Analytical Engine, were patterns of holes in punched cards.
Any Symbols Will Do
Because most of us don't think of programming as a nontextual activity, it is hard for us to realize that one can communicate information to a computer in many different ways. A computer is, after all, just a symbol manipulation tool. The use of letters and numbers as symbols is arbitrary; it could work just as easily with any other symbols we might devise.
The reason for exploring this topic at all is simple: Without being consciously aware of it, we have been overtaken by symbolic nontextual programming languages and have embraced them wholeheartedly. We have, in fact, become a nation of programmers without knowing it.
Anyone who builds a new level of Lode Runner, designs a new game with Pinball Construction Set, creates a new spreadsheet with Multiplan, or works with any of the myriad construction set systems that represent one of the best-selling classes of software ever, is, in fact, creating computer programs with a minimum of typing. Many of these programs are created with no typing whatsoever.
So, it is mildly amusing to hear many of these same construction set users suggest that programming is a "typing" activity.
Free Choice
Again, it is not typing that is the issue. I will argue that the nature of our communication medium determines the nature of the ideas we communicate. Some of us express ourselves quite well in linear textual form, and others of us are more comfortable with pictures and diagrams. There is nothing wrong with either approach to expression. What is important is that our technology has advanced to the point where people are free to choose their communication form, and even to switch back and forth between the two if they so desire. Any choice between the two has to be based on personal preference, not on the assumption that there is one "right" way to communicate.
Judging from the popularity of the visual interface (there is even a version of a Macintosh-like graphics program available for the PCjr!), the development of visual interfaces is opening up computer access to many thousands of people who would otherwise never have been interested in using this technology.
But the fact that this new communication mode has been made available to the general public is no reason to think that we already know all of its consequences. As I gaze into my cloudy crystal ball, I see a future in which much of our programming will be done without the labor of typing, a future in which we will write programs by constructing flow charts that indicate graphically what we want the computer to do for us.
These visual programming environments will let us express a goal without also requiring that we tell the computer how to achieve that goal.
Next month we will explore a visual programming environment in depth and compare it to text-based programming. Our visual programming language will be the database language HELIX, developed by Odesta for the Macintosh.