
40 years of icons: the evolution of the modern computer interface

Diary of a WIMP at middle age

Fifty years ago, the word “computer” had a very different meaning. Prior to World War II, it referred not to machines but to people (mostly women, who could be hired more cheaply) employed as human calculators. During the war, military research spawned electronic behemoths, “computers” such as Colossus and ENIAC; afterward, IBM commercialized equally intimidating, multimillion-dollar machines that required their own rooms. They shipped to some of the largest companies in the nation, where they were tended by specialists. In the popular imagination they were recognized symbols of bureaucracy and dehumanization — the dictatorial Alpha 60 in Jean-Luc Godard’s Alphaville, for example, and the domineering EPICAC in Kurt Vonnegut’s Player Piano — and there was no such thing as a “personal computer.”

But by the late 1960s that began to change. What happened was not simply the advance of technology (though that helped), but a change in philosophy: a revolution in thinking. If Colossus and ENIAC had been data processors, essentially hypertrophic calculators, researchers and hobbyists of the time saw the potential for something else, something new. Among the first generation to really grow up with computers, whether home-built or laboratory-bought, a few saw them as more than oversized arithmetic devices. They saw the dawning of a new medium for communication, and machines that didn’t have to reinforce bureaucracy. Instead, they could empower individuals. They could be truly personal computers.

Of course, most people didn’t see such possibility in the green-hued terminals of the 1960s. But among those who did were Douglas Engelbart, who, believing he’d accomplished all his engineering career goals at 25, devoted himself to enriching mankind through technology; and Alan Kay, who once famously quipped to a less visionary superior, “The best way to predict the future is to invent it.” With other like-minded souls they paved the way for Jobs and Gates and the world that came after. They did, in fact, invent the future.

The mother of all demos

If personal computing has a single birthday, it very well might be December 9, 1968. That day, Douglas Engelbart took the stage at Brooks Hall in San Francisco to demonstrate the system he and his colleagues at the Augmentation Research Center had spent nearly a decade building. They called it NLS, for oNLine System, and over the next 90 minutes Engelbart would reveal just how far they’d progressed.

First, he used something called a “mouse” — which would soon displace that other method of graphical input, the light pen. He showed off WYSIWYG editing with embedded hyperlinks; he combined text with graphics. He speculated about the future of ARPANet, then barely on the horizon of technical possibility, which he believed would soon allow him to demonstrate NLS anywhere in the country. After all, he was already videoconferencing with his colleague behind the scenes in Menlo Park, some 30 miles away.

"We were not just building a tool, we were designing an entire system for working with knowledge."

It became known as “the mother of all demos,” credited with influencing a generation of technologists. Just as important as the technical accomplishments, though, was how Engelbart chose to think about his project. He didn’t want to just offload rote calculations to machines; he wanted to help human beings work in smarter, more effective ways. That day he showed how it could be done. “We weren’t interested in ‘automation’ but in ‘augmentation,’” he said later. “We were not just building a tool, we were designing an entire system for working with knowledge.”

That idea rang true to Alan Kay. To him the increasing power of microprocessors meant a coming, unstoppable revolution: computers the size of hardbound books, available to anyone. He embraced that future with a concept he called the Dynabook; today we’d recognize it as a prototype tablet computer. But getting to a future where anyone could use a computer meant radical change. “If the computer is to be truly ‘personal,’” he later wrote, “adult and child users must be able to get it to perform useful activities without resorting to the services of an expert. Simple tasks must be simple, and complex ones must be possible.”

The Dynabook never became a reality, due in part to the technological limitations of the time. Engelbart’s NLS likewise never became a viable system. Unlike Kay, Engelbart embraced complexity. His virtuoso demonstration belied NLS’s steep learning curve; at one point he told his colleagues it would eventually have fifty thousand instructions. By the early 1970s, the team he’d assembled at the Augmentation Research Center had begun to drift away, many frustrated by the visionary’s inability to let his creation out into the world. They set out on their own adventures.

And many, Alan Kay among them, soon arrived at a new home: Xerox’s just-opened Palo Alto Research Center.


Lisa, the Alto, and the Mac

One of the Promethean myths about Steve Jobs has him and a crew of early Apple employees walking into Xerox PARC in 1979, then walking out with the GUI technology they would use to revolutionize the industry with the Lisa and Macintosh computers. More likely, Apple had already begun work on a graphical interface by the time Jobs finagled the PARC tour. But according to Apple engineer Bill Atkinson, seeing a working mouse-based, windowed GUI helped him solve some problems with the Lisa’s early design.

More remarkably, Xerox had completed its version of a stand-alone, single-user desktop computer more than six years earlier — in April 1973. The inspiration had come from Alan Kay’s Dynabook concept, which sparked a number of semi-skeptical engineers to see if they could build a working prototype in just a few months. The finished machine, which they called the Xerox Alto, was the product of the inquisitive and experimental environment fostered at PARC, well outside the purview of its corporate owners. “If our theories about the utility of cheap, powerful personal computers are correct, we should be able to demonstrate them convincingly on Alto,” lab manager Butler Lampson wrote at the time. “If they are wrong, we can find out why.”

Xerox Alto GUI

The Alto featured WYSIWYG word processing, email, bitmap and vector graphics editors, and the first version of the Smalltalk programming environment. More importantly, the Alto introduced the WIMP interface — windows, icons, menus, pointer — that defines virtually every desktop GUI in use today. If the subsequent ubiquity is a measure of predictive accuracy, the Alto team must have been right in their theories about personal computing.

"Oh my god, this is like a spaceship from the future that has just landed on the front lawn."

Xerox never sold the Alto, though about 2,000 were made, and a version of it called the PERQ came to market through a company founded by ex-PARC employees. Xerox never fully committed to the experimentation taking place at its research facility; it wasn’t until 1981 that it debuted the Xerox 8010 Information System, which built on ideas pioneered in the Alto. But unlike the Alto, it consisted of many expensive components, including two or three workstations and associated devices, which drove the typical price above $50,000. Even aimed at business customers it was a tough sell, and the Star, as it was informally known, failed in the marketplace. Like Apple’s Lisa, it was innovative but overpriced, and burdened by sluggish software.

Yet as with Douglas Engelbart’s earlier NLS, the potential of Xerox’s designs spoke to many users. “The Alto cemented some aspects, but the Xerox Star was where it all came together,” says Bruce Damer, a user interface expert who maintains a collection of vintage computers at the Digibarn. “That’s where you had something that looked beautiful and was functional. It was a complete system launched in April 1981. Everyone looked at that like, ‘Oh my god, this is like a spaceship from the future that has just landed on the front lawn.’” Microsoft was one early customer.


"We got stuck in that metaphor for 30 years."
Windows 2.0 GUI

In 1984, however, Apple launched the Macintosh. “I was an early user of the Xerox Star, and it was clear to me the Mac was a big step on top of the Star,” says Ben Shneiderman, a professor of computer science and a user-interface expert who helped codify the WIMP interface. He also testified in the copyright lawsuit Apple brought against Microsoft for allegedly ripping off elements of the Mac and Lisa GUI. Apple lost, and a similar suit filed by Xerox against Apple was dismissed.

The 1988 lawsuit underscores just how quickly personal computing converged around a single dominant user interface. Compare, for example, Windows 2.0 (which licensed Apple elements) with Windows 7. Fundamentally, there isn’t much difference. For Shneiderman, this makes a certain degree of sense. After all, the desktop metaphor and the WIMP interface work. “I’m a great believer that computers are visual machines,” he says. “We see what we have and then we click on what we like. That paradigm seems quite potent.”

Bruce Damer is less sanguine. “We got stuck in that metaphor for 30 years,” he says. “It’s amazing.” (He also remembers his initial reaction to the Mac: “This is like an Etch-a-Sketch compared to the full network-computing model of the Star. It was like, ‘Well, ok. I don’t know if these guys are going to survive.’”) With rare departures, including context-sensitive menus used on NeXTStep and others, personal computers have remained largely synonymous with, well, virtual desktops.

Yet there are signs of a thaw in the decades-long interface freeze that’s kept us working with GUIs virtually identical to the Alto’s. Damer points to augmented reality as one possible source of novelty. Yes, the avatar-based interfaces of 1990s virtual reality never really caught on, but who’s to say what might happen if true revolutionaries get their hands on, say, Google Glass? The potential of HTML5-based operating systems leaves room for experimentation as well.

And while mobile operating systems have similarly congealed around kiosk-like interfaces, as the line between computers and cellphones becomes increasingly irrelevant, so too does the line between their respective operating systems. Both Apple and Microsoft seem intent on unified interfaces. (Google’s new Pixel touchscreen laptop also suggests a desire to experiment.) Shneiderman is the more cautious of the two about Microsoft’s new desktop OS: “It just doesn’t quite click for everybody; it may be a little too much of a change for some people. They don’t want a lot of changes.” But Damer says, “We always poo-poo Microsoft and say they can’t do anything, but frankly in Windows 8 they have done something. They’ve moved the bar up.” Both agree it’s a radical departure — and such change is likely inevitable as personal computing as we know it morphs into…something else.

Bruce Damer thinks about that kind of heady change when he walks out into the Digibarn, an actual barn housing computers that go back 35 years. He can sit down at them and use them; they feel familiar and yet completely different. But that’s the subjective experience of personal computing.

“The greatest thing is that this has sort of become a sandbox for the mind,” he says. It’s a medium, not just a calculating machine. “We now have this thing in front of us, it allows us to paint, to write, to listen to music. It mesmerizes us and steals our lives. I think it is the invention of the last 500 years.” And we’re waiting to see what it does next.