In the mid-1990s, as first the Internet and then the World Wide Web swung into public view, talk of revolution filled the air. Politics, economics, the nature of the self—all seemed to teeter on the edge of transformation. The Internet was about to “flatten organizations, globalize society, decentralize control, and help harmonize people,” as MIT’s Nicholas Negroponte put it. The stodgy men in gray flannel suits who had so confidently roamed the corridors of industry would shortly disappear, and so too would the chains of command on which their authority depended. In their place, wrote Negroponte and dozens of others, the Internet would bring about the rise of a new “digital generation”—playful, self-sufficient, psychologically whole—and it would see that generation gather, like the Net itself, into collaborative networks of independent peers. States too would melt away, their citizens lured back from archaic party-based politics to the “natural” agora of the digitized marketplace. Even the individual self, so long trapped in the human body, would finally be free to step outside its fleshy confines, explore its authentic interests, and find others with whom it might achieve communion. Ubiquitous networked computing had arrived, and in its shiny array of interlinked devices, pundits, scholars, and investors alike saw the image of an ideal society: decentralized, egalitarian, harmonious, and free.
But how did this happen? Only thirty years earlier, computers had been the tools and emblems of the same unfeeling industrial-era social machine whose collapse they now seemed ready to bring about. In the winter of 1964, for instance, students marching for free speech at the University of California at Berkeley feared that America’s political leaders were treating them as if they were bits of abstract data. One after another, they took up blank computer cards, punched them through with new patterns of holes—“FSM” and “STRIKE”—and hung them around their necks. One student even pinned a sign to his chest that parroted the cards’ user instructions: “I am a UC student. Please do not fold, bend, spindle or mutilate me.” For the marchers of the Free Speech Movement, as for many other Americans throughout the 1960s, computers loomed as technologies of dehumanization, of centralized bureaucracy and the rationalization of social life, and, ultimately, of the Vietnam War. Yet, in the 1990s, the same machines that had served as the defining devices of cold war technocracy emerged as the symbols of its transformation. Two decades after the end of the Vietnam War and the fading of the American counterculture, computers somehow seemed poised to bring to life the countercultural dream of empowered individualism, collaborative community, and spiritual communion. How did the cultural meaning of information technology shift so drastically?
As a number of journalists and historians have suggested, part of the answer is technological. By the 1990s, the room-sized, stand-alone calculating machines of the cold war era had largely disappeared. So too had the armored rooms in which they were housed and the army of technicians that supported them. Now Americans had taken up microcomputers, some the size of notebooks, all of them available to the individual user, regardless of his or her institutional standing. These new machines could perform a range of tasks that far exceeded even the complex calculations for which digital computers had first been built. They became communication devices and were used to prepare novels and spreadsheets, pictures and graphs. Linked over telephone wires and fiber-optic cables, they allowed their users to send messages to one another, to download reams of information from libraries around the world, and to publish their own thoughts on the World Wide Web. In all of these ways, changes in computer technology expanded the range of uses to which computers could be put and the types of social relations they were able to facilitate.
As dramatic as they were, however, these changes alone do not account for the particular utopian visions to which computers became attached. The fact that a computer can be put on a desktop, for instance, and that it can be used by an individual, does not make it a “personal” technology. Nor does the fact that individuals can come together by means of computer networks necessarily require that their gatherings become “virtual communities.” On the contrary, as Shoshana Zuboff has pointed out, in the office, desktop computers and computer networks can become powerful tools for integrating the individual ever more closely into the corporation. At home, those same machines not only allow schoolchildren to download citations from the public library, they also turn the living room into a digital shopping mall. For retailers, the computer in the home becomes an opportunity to harvest all sorts of information about potential customers. For all the utopian claims surrounding the emergence of the Internet, there is nothing about a computer or a computer network that necessarily requires that it level organizational structures, render the individual more psychologically whole, or drive the establishment of intimate, though geographically distributed, communities.
How was it, then, that computers and computer networks became linked to visions of peer-to-peer ad-hocracy, a leveled marketplace, and a more authentic self? Where did these visions come from? And who enlisted computing machines to represent them?
If that hanging question doesn’t make you want to read the book, I don’t know what will.
I just bought a copy of this book for a coworker. I used to frequently give out copies of Richard Bach’s *Illusions* to friends, but this is a little heavier reading.