Judy Malloy donations to the MAL’s early e-literature collection


It’s an honor indeed to announce that Judy Malloy, a true pioneer of hypertext and of electronic literature more broadly, has donated a set of floppies along with documentation to the Media Archaeology Lab. To give you a sense of her contributions to the field, I’ve excerpted the following from her longer and even more fascinating biography on her website:

Her work as a pioneer on the Internet and in electronic literature began after cataloguing, designing and programming information systems in the mid and late sixties, at the time when library information systems designers were among the first to utilize computers to access information, and futurists were envisioning their use in the humanities. She began creatively using narrative information in artists books in the late seventies and early eighties and then, with a vision of nonsequential literature, wrote and programmed Uncle Roger — one of the first (if not the first) works of hypertext literature — on Art Com Electronic Network in the Well (1986-1988). In the following years, she created a series of innovative literary works that run on computer platforms and were published by Eastgate and on the Internet. In 1993, she was invited to Xerox PARC where she worked in CSL (Computer Science Laboratory) as the first artist in their artist-in-residence program. Judy Malloy created one of the first arts websites, Making Art Online (1993-1994), originally commissioned in collaboration with the ANIMA site in Vancouver (CSIR/Western Front) and currently hosted on the website of the Walker Art Center. l0ve0ne, written and coded in 1994, was the first selection in the Eastgate Web Workshop. A complete collection of her papers and software is archived in the Judy Malloy Papers at the David M. Rubenstein Rare Book & Manuscript Library at Duke University.

Below is Malloy’s packing list of the works she has generously donated to the lab – I will soon test all the floppies and will add notes here as to their functionality. Enjoy and, as always, the MAL welcomes visiting researchers!

*

Disk labeled “molasses”
Malloy’s 1988 HyperCard stack Molasses.

Judy Malloy, Molasses, Berkeley, CA, 1988. (For Macintosh computers running HyperCard – produced at the Whole Earth Review under the sponsorship of Apple Computer.) Exhibited in the traveling exhibition Art Com Software at the Tisch School of the Arts, New York University, NYC, 1988, and elsewhere.

Judy Malloy, its name was Penelope, 1990.
This is probably a PC disk, an interim version between the 1989 exhibition version and the more formally packaged 1991 version, which was distributed by Art Com Software.

Judy Malloy, its name was Penelope. Eastgate Systems, 1993.
This was Eastgate’s first version, published on disk for both Macs and PCs. The disk is signed and actually says 1992. This copy was my mother’s copy, which is why there is a label that says Barbara Powers in it. Note that the pages in these early editions stuck together.

Judy Malloy, Wasting Time, Penelope, Uncle Roger
It looks as if all three of these works are on the disk. It was probably a disk I used to send the works around for exhibition consideration, and it is probably a PC disk. Wasting Time was published as follows: Judy Malloy, “Wasting Time, A Narrative Data Structure,” After the Book (Perforations 3), Summer 1992.

Judy Malloy and Cathy Marshall, Forward Anywhere. Eastgate Systems, 1996.
This is a disk version. It was published in both Mac and PC versions, but this is probably a PC version. A second version was published with a CD.

James Johnson, Second Thoughts, 1989.
Distributed by Art Com Software. He sent me a couple of copies, and I gave the other one to my archives at Duke.

Documentation Folders

Bad Information Base #1
This is the first work of computer-mediated text that I created. Note that it is not the Bad Information Base #2, which was created on ACEN later in 1986. Bad Information Base #1 is documented in Judy Malloy, “OK Research/OK Genetic Engineering/Bad Information, Information Art Defines Technology,” Leonardo 21(4): 371-375, 1988. It is explained in the May 1986 documentation in the folder. Basically, I made the database and then sent out cards to the mail art network. When the cards were returned, I ran a search and then sent a printout to the requester. In addition to a documentation sheet, the folder includes a blank search card, an envelope label (it was pasted onto the envelopes), a second edition envelope, a blank letterhead sheet, and a copy of the accordion-fold list of keywords that was sent along with the card. I don’t have a disk of this work available, but Duke has printouts and a notebook with copies of the completed search cards.

Uncle Roger
A documentation sheet for A Party in Woodside, 1987

This was probably included with the 1987 version of A Party in Woodside, which was self-published and distributed by Art Com.

An instruction booklet that was included in the packaging of the Apple II version of Uncle Roger, which contained all three files. This version was probably published (self-published by Bad Information) in 1988 and was distributed by Art Com.

Its name was Penelope
Documentation for the exhibition version.

A flyer advertising the self-published (Narrabase Press) version that was available from Art Com.

Unassembled packaging for the Narrabase Press version. The three pieces inside the watercolor paper folder are a cover, a back cover page, and instructions. These pieces were pasted onto folded watercolor paper, and a pocket that I constructed inside the folded watercolor paper contained a disk. An unassembled disk cover is also included. The whole, when assembled, was enclosed in a heavy clear plastic sleeve.

Molasses
This folder contains a few Xeroxes or printouts of screens from Molasses, one of which has instructions for reading the work.

Wasting Time
A documentation sheet for Wasting Time.

Advertisements

From the Philosophy of the Open to the Ideology of the User-Friendly

Below is an excerpt from chapter two, “From the Philosophy of the Open to the Ideology of the User-Friendly,” of my book Reading Writing Interfaces: From the Digital to the Bookbound (University of Minnesota Press, 2014). It is also the basis of the talk I gave at the MLA convention in January 2013 and the full version of the talk I gave at Counterpath Press in February 2013. As always, I welcome your comments!

*

“Knowledge is power: information is the fabric of knowledge; the controller of information wields power.”
–“Some Laws of Personal Computing,”
Byte 1979 (Lewis 191)

“If a system is to serve the creative spirit, it must be entirely comprehensible to a single individual…Any barrier that exists between the user and some part of the system will eventually be a barrier to creative expression. Any part of the system that cannot be changed or that is not sufficiently general is a likely source of impediment.”
–“Design Principles Behind Smalltalk,” Byte 1981 (Ingalls 286)

My talk today is concerned with a decade in which we can track a shift in what it meant for a computer to be user-friendly: from a tool that, through a graphical user interface (GUI), encourages understanding, tinkering, and creativity, to a machine whose GUI creates an efficient workstation for productivity and task-management. I am concerned, too, with the effect of this shift on digital literary production in particular. The turn from computer systems based on the command-line interface to those based on “direct manipulation” interfaces that are iconic or graphical was driven by rhetoric insisting that the GUI, particularly the one pioneered by the Apple Macintosh design team, was not just different from the command-line interface but naturally better, easier, friendlier. The Macintosh was, as Jean-Louis Gassée (who headed up its development after Steve Jobs’s departure in 1985) writes without any hint of irony, “the third apple,” after the first apple in the Old Testament and the second apple that was Isaac Newton’s, “the one that widens the paths of knowledge leading toward the future.”

Despite studies released since 1985 that clearly demonstrate GUIs are not necessarily better than command-line interfaces in terms of how easy they are to learn and to use, Apple – particularly under Jobs’s leadership – successfully created such a convincing aura of inevitable superiority around the Macintosh GUI that to this day the same “user-friendly” philosophy, paired with a closed architecture that now goes unnoticed, fuels consumers’ religious zeal for Apple products. I have been an avid consumer of Apple products since I owned my first Macintosh PowerBook in 1995; but what concerns me is that “user-friendly” now takes the shape of keeping users steadfastly unaware and uninformed about how their computers, their reading/writing interfaces, work, let alone how those interfaces shape and determine users’ access to knowledge and their ability to produce knowledge. As Wendy Chun points out, the user-friendly system is one in which users are, on the one hand, given the ability to “map, to zoom in and out, to manipulate, and to act,” but the result is a “seemingly sovereign individual” who is mostly a devoted consumer of ready-made software and ready-made information whose framing and underlying mechanisms we are not privy to.

However, it’s not necessarily the GUI per se that is responsible for the creation of Chun’s “seemingly sovereign individual” but rather a particular philosophy of computing and design, underlying one model of the GUI, that has become the standard for nearly all interface design. The earliest example of a GUI-like interface whose philosophy is fundamentally different from that of the Macintosh is Douglas Engelbart’s NLS, or “oN-Line System,” which he began work on in 1962 and famously demonstrated in 1968. While his “interactive, multi-console computer-display system” with keyboard, screen, mouse, and something he called a chord handset is commonly cited as the originator of the GUI, Engelbart wasn’t so much interested in creating a user-friendly machine as he was invested in “augmenting human intellect.” As he first put it in 1962, this augmentation meant “increasing the capability of a man to approach a complex problem situation, to gain comprehension to suit his particular needs, and to derive solutions to problems.” The NLS was not about providing users with ready-made software and tools to choose from or consume; rather, it was about bootstrapping, or “the creation of tools for expert computer users,” and about providing the means for users to create better tools, or tools better suited to their own individual needs. We can see this emphasis on tool-building and customization that comes out of an augmented intellect in Engelbart’s provision of “view-control” (which allows users to determine how much text they see on the screen as well as the form of that view) and “chains of views” (which allows the user to link related files) in his document editing program.

Underlining the fact that the history of computing is resolutely structured by stops, starts, and ruptures rather than by a series of linear firsts, in the year before Engelbart gave his “mother of all demos,” Seymour Papert and Wally Feurzeig began work on a learning-oriented programming language they called Logo, explicitly for children but implicitly for learners of all ages. Throughout the 1970s, Papert and his team at MIT conducted research with children in nearby schools as they tried to create a version of Logo defined by “modularity, extensibility, interactivity, and flexibility.” The Apple II was the most popular home computer from the late 1970s until the mid-1980s and, given the open architecture of that machine, released in 1977, a public version of Logo was licensed for Apple II computers as well as for the less popular Texas Instruments TI 99/4. In 1980, Papert published the decidedly influential Mindstorms: Children, Computers, and Powerful Ideas, in which he makes claims about the power of computers that are startling for a contemporary readership steeped in an utterly different notion of what accessible or user-friendly computing might mean. Describing his vision of “computer-aided instruction” in which “the child programs the computer” rather than one in which the child adapts to the computer or even is taught by the computer, Papert asserts that children thereby “embark on an exploration about how they themselves think…Thinking about thinking turns the child into an epistemologist, an experience not even shared by most adults” (19). And two years later, in the February 1982 issue of Byte magazine, Logo was advertised as a general-purpose tool for thinking, with a degree of intellectuality rare for any advertisement: “Logo has often been described as a language for children. It is so, but in the same sense that English is a language for children, a sense that does not preclude its being ALSO a language for poets, scientists, and philosophers.” Moreover, for Papert, thinking about thinking by way of programming happens largely when the user encounters bugs in the system and has to identify where the bug is in order to remove it: “One does not expect anything to work at the first try. One does not judge by standards like ‘right – you get a good grade’ and ‘wrong – you get a bad grade.’ Rather one asks the question: ‘How can I fix it?’ and to fix it one has first to understand what happened in its own terms.” (101) Learning through doing, tinkering, experimentation, and trial-and-error is, then, how one comes to have a genuine computer literacy.
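Papert’s turtle geometry survives, incidentally, in Python’s standard-library turtle module, a direct descendant of Logo’s turtle graphics. A minimal sketch (my own illustration, not Papert’s code) of the write-run-debug loop he describes:

```python
# A minimal sketch of Logo-style turtle programming, here in Python's
# standard-library turtle module (a descendant of Logo's turtle
# graphics) rather than in Logo itself. Illustration only.
import turtle

def square(side_length):
    """Draw a square by repeating 'go forward, turn left' four times."""
    for _ in range(4):
        turtle.forward(side_length)
        turtle.left(90)  # change 90 to 60 and the figure no longer
                         # closes: exactly the kind of bug Papert asks
                         # the learner to reason about and fix

square(100)
turtle.done()  # keep the drawing window open
```

Running it, breaking it, and working out why the figure no longer closes is the debugging-as-epistemology exercise Mindstorms describes.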

In the year after Papert and his colleagues began work on Logo, and the same year as Engelbart’s NLS demo, Alan Kay commenced work on the never-realized Dynabook, produced as an “interim Dynabook” in 1972 in the form of the GUI-based Xerox Alto, which could also run the Smalltalk language. Kay thereby introduced the notion of “personal dynamic media” for “children of all ages,” media that “could have the power to handle virtually all of its owner’s information-related needs.” Kay, then, along with Engelbart and Papert, understood very clearly the need for computing to move out of the specialized environment of the research lab and into people’s homes by way of a philosophy of the user-friendly oriented toward the flexible production (rather than the rigid consumption) of knowledge. It was a realization eventually shared by the broader computing community, for by 1976 Byte magazine was publishing editorials such as “Homebrewery vs the Software Priesthood,” declaring that “the movement towards personalized and individualized computing is an important threat to the aura of mystery that has surrounded the computer for its entire history” (90). And more:

The movement of computers into people’s homes makes it important for us personal systems users to focus our efforts toward having computers do what we want them to do rather than what someone else has blessed for us…When computers move into people’s homes, it would be most unfortunate if they were merely black boxes whose internal workings remained the exclusive province of the priests…Now it is not necessary that everybody be a programmer, but the potential should be there…(90).

From “Homebrewery vs the Software Priesthood,” Byte magazine, October 1976.

It was precisely this potential for programming, and for novice as well as expert use via an open, extensible, and flexible architecture, that Engelbart, Papert, and Kay sought to build into their models of the personal computer to ensure that home computers did not become “merely black boxes whose internal workings remained the exclusive province of the priests.” By contrast, as Kay later exhorted his readers in 1977, “imagine having your own self-contained knowledge manipulator in a portable package the size and shape of an ordinary notebook.” Designed to have a keyboard, an NLS-inspired “chord” keyboard, a mouse, a display, and windows, the Dynabook would allow users to realize Engelbart’s dream of a computing device that gave them the ability to create their own ways to view and manipulate information. Rather than the over-determined post-Macintosh GUI computer, designed to pre-empt every user’s possible need with an over-abundance of ready-made tools such that “those who wish to do something different will have to put in considerable effort,” Kay wanted a machine that was “designed in a way that any owner could mold and channel its power to his own needs…a metamedium, whose content would be a wide range of already-existing and not-yet-invented media” (403). More, Kay understood from reading Marshall McLuhan that the design of this new metamedium was no small matter, for the very use of a medium changes an individual’s, a culture’s, thought patterns. Clearly, he wanted thought patterns to move toward a literacy that involved reading and writing in the new medium instead of the unthinking consumption of ready-made tools, for, crucially, “the ability to ‘read’ a medium means you can access materials and tools created by others. The ability to ‘write’ in a medium means you can generate materials and tools for others. You must have both to be literate.”

While Kay envisioned that the GUI-like interface of the Dynabook would play a crucial role in realizing this “metamedium,” the Smalltalk software driving the interface was equally necessary. Its goal was “to provide computer support for the creative spirit in everyone” (286). Not surprisingly, Kay and his collaborators began working intensely with children after the creation of Smalltalk-71. Influenced by the developmental psychologist Jean Piaget as well as by Kay’s own observation, in 1968, of Papert and his colleagues’ use of Logo, Smalltalk relied heavily on graphics and animation through one particular incarnation of the GUI: the Windows, Icons, Menus, and Pointers (or WIMP) interface. Kay writes that in the course of observing Papert using Logo in schools, he realized that these were children “doing real programming…”:

  …this encounter finally hit me with what the destiny of personal computing really was going to be. Not a personal dynamic vehicle, as in Engelbart’s metaphor opposed to the IBM “railroads”, but something much more profound: a personal dynamic medium. With a vehicle one could wait until high school and give “drivers ed”, but if it was a medium, it had to extend into the world of childhood (“The Early History” 81).

As long as the emphasis in computing was on learning – especially through making and doing – the target demographic was going to be children; and as long as children could use the system, so too could any adult, provided they understood the underlying structure, the how and the why, of the programming language. As Kay astutely points out, “…we make not just to have, but to know. But the having can happen without most of the knowing taking place.” And, as he goes on to point out, designing the Smalltalk user interface shifted the purpose of interface design from “access to functionality” to an “environment in which users learn by doing” (84). Smalltalk designers, then, didn’t completely reject the notion of ready-made software so much as they sought to provide the user with a set of software building blocks that the user could combine and edit to create a customized system. Or, as Trygve Reenskaug (a visiting Norwegian computer scientist with the Smalltalk group at Xerox PARC in the late 1970s) put it:

…the new user of a Smalltalk system is likely to begin by using its ready-made application systems for writing and illustrating documents, for designing aircraft wings, for doing homework, for searching through old court decisions, for composing music, or whatever. After a while, he may become curious as to how his system works. He should then be able to “open up” the application object on the screen to see its component parts and to find out how they work together (166).

With an emphasis on learning and building through an open architecture, Adele Goldberg – co-developer of Smalltalk along with Alan Kay and author of most of the Smalltalk documentation – describes the Smalltalk programming environment, in the August 1981 special issue of Byte, as one that set out to defy the conventional software development environment, as illustrated in Figure 1 below:


Image by Adele Goldberg contrasting the conventional philosophy of software driven by “wizards” in Figure 1 versus that provided by Smalltalk for the benefit of the programmer/user in Figure 2.

The Taj Mahal in Figure 1 “represents a complete programming environment, which includes the tools for developing programs as well as the language in which the programs are written. The users must walk whatever bridge the programmer builds” (Goldberg 18). Figure 2, by contrast, represents a Taj Mahal in which the “software priest” is transformed into one who merely provides the initial shape of the environment which programmers can then modify by building “application kits” or “subsets of the system whose parts can be used by a nonprogrammer to build a customized version of the application” (18). The user or non-programmer, then, is an active builder in dialogue with the programmer instead of a passive consumer of a pre-determined (and perhaps even over-determined) environment.

At roughly the same time as Kay began work on Smalltalk in the early 1970s, he was also involved with the team of designers working on the NLS-inspired Xerox Alto, which was developed in 1973 as, again, an “interim Dynabook”: it had a three-button mouse and a GUI that worked in conjunction with the desktop metaphor, and it ran Smalltalk. While only several thousand non-commercially available Altos were manufactured, it was – as team members Chuck Thacker and Butler Lampson believe – probably the first computer explicitly called a “personal computer,” because of its GUI and its network capabilities. By 1981, Xerox had designed and produced a commercially available version of the Alto, called the 8010 Star Information System, which was sold along with Smalltalk-based software. But as Jeff Johnson et al. point out, the most important connection between Smalltalk and the Xerox Star lay in the fact that Smalltalk could clearly illustrate the compelling appeal of a graphical display that the user accessed via mouse, overlapping windows, and icons (22).


Screenshot of Xerox Star from Jeff Johnson et al’s “The Xerox Star: A Retrospective.”

However, the significance of the Star lies partly in the indisputable impact it had on the GUI design of first the Apple Lisa and then the Macintosh; it lies also in the way the Star was clearly labeled a workstation for “business professionals who handle information” rather than a metamedium or a tool for creating, or even for thinking about thinking. And in fact the Star – the first commercially available computer born out of work by Engelbart, Papert, and Kay that attempted to satisfy both novice and expert users by providing an open, extensible, flexible environment, and that also happened to be graphical – was conflicted at its core. While in some ways the Star was philosophically very much in line with the open thinking of Engelbart, Papert, and Kay, in other ways its philosophy as much as its GUI directly paved the way to the closed architecture and consumption-based design of the Macintosh. Take, for example, the overall design principles of the Star, which were aimed at making the system seem “familiar and friendly”:

Easy             Hard

concrete         abstract
visible          invisible
copying          creating
choosing         filling in
recognizing      generating
editing          programming
interactive      batch

Star designers avowed to avoid the characteristics listed on the right while adhering to a schema that exemplifies those listed on the left. While there’s little doubt that ease of use was of central importance to Engelbart, Papert, and Kay – often brought about through interactivity and through making computer operations and commands visible – the avoidance of “creating,” “generating,” or “programming” couldn’t be further from their vision of the future of computing. At the same time as the Star forecloses on creating, generating, and programming through its highly restrictive set of commands in the name of simplicity, it also wants to promote users’ understanding of the system as a whole – although, again, this particular incarnation of the GUI represents the beginning of a shift toward only a superficial understanding of the system. Without a fully open, flexible, and extensible architecture, the home computer becomes less a tool for learning and creativity and more a tool for simply “handling information.”

By contrast, as I’ll now discuss, the Apple Macintosh was clearly designed for consumers, not creators. It was marketed as a democratizing machine when in fact it was democratizing only insofar as it marked a profound shift in personal computing away from the sort of inside-out know-how one needed to create on an Apple II and toward the kind of perfunctory know-how one needed to navigate the surface of the Macintosh – the kind of knowledge needed to click this or that button. The Macintosh was democratic only in the manner that any kitchen appliance is democratic. That said, Apple’s redefinition of the overall philosophy of personal computing exemplifies just one of many reversals that abound in the ten-year period from the mid-1970s to the mid-1980s. In relation to the crucial change that took place in the mid-1980s, from open, flexible, and extensible computing systems for creativity to ones that were closed, transparent, and task-oriented, the way in which the Apple Macintosh was framed from the time of its release in January 1984 represented a near-complete purging of the philosophy promoted by Engelbart, Kay, and Papert. This purging of the recent past took place under the guise of Apple’s version of the user-friendly, which, among other things, pitted itself against the supposedly “cryptic,” “arcane,” “phosphorescent heap” that was the command-line interface as well as, it was implied, any earlier incarnation of the GUI.

However, it’s important to note that, while the Macintosh philosophy purged much of what had come before, it emerged from momentum gathering in other parts of the computing industry, which were particularly concerned with defining standards for the computer interface. Up to this point, personal computers were remarkably different from each other. Commodore 64 computers, for example, came with both a ‘Commodore’ key that gave the user access to an alternate character set and four programmable function keys that, with the shift button, could each be programmed for two different functions. By contrast, Apple II computers came with two programmable function keys, and Apple III, IIc, and IIe computers came with open-Apple and closed-Apple keys that provided the user with shortcuts to operations such as cut-and-paste or copy (in the same way that the contemporary ‘command’ key functions).

No doubt in response to the difficulties this variability posed to expanding the customer base for personal computers, Byte magazine ran a two-part series in October and November 1982 dedicated to the issue of industry standards by way of an introduction to a proposed uniform interface called the “Human Applications Standard Computer Interface” (or HASCI). Asserting the importance of turning the computer into a “consumer product,” author Chris Rutkowski declares that every computer ought to have a “standard, easy-to-use format” that “approaches one of transparency. The user is able to apply intellect directly to the task; the tool itself seems to disappear” (291, 299-300). Of course, a computer that is easy to use is entirely desirable; however, at this point ease of use is framed in terms of the disappearance of the tool being used, in the name of ‘transparency’ – which now means that users can efficiently accomplish their tasks with the help of a glossy surface that shields them from the depths of the computer, instead of the earlier notion of ‘transparency,’ which referred to a user’s ability to open up the hood of the computer and understand its inner workings directly.

Thus, no doubt in a bid to finally produce a computer that realized these ideas and appealed to consumers who are “drivers, not repairmen,” Apple unveiled the Lisa in June 1983 for nearly $10,000 (about $23,000 in 2012 dollars) as a cheaper, more user-friendly version of the Xerox Alto/Star, which had sold for $16,000 in 1981 (about $40,000 in 2012 dollars). At least partly inspired by Larry Tesler’s 1979 Xerox PARC demo to Steve Jobs, the Lisa used a one-button mouse, overlapping windows, pop-up menus, a clipboard, and a trashcan. As Tesler was adamant to point out in a 1985 article on the “Legacy of the Lisa,” it was “the first product to let you drag [icons] with the mouse, open them by double-clicking, and watch them zoom into overlapping windows” (17). The Lisa, then, moved that much closer to realizing the dream of transparency – for example, through a mode of double-clicking that encouraged users to develop a quick, physical habit that bypasses the intellect. More, its staggering 2048K worth of software and its three expansion slots also firmly moved it in the direction of a ready-made, closed consumer product and definitively away from the Apple II, which, when it was first released in 1977, came with 16K bytes of code and eight expansion slots.

Expansion slots symbolize the direction that computing was to take from the moment the Lisa was released, followed by the release of the Macintosh in January 1984, to the present day. Jef Raskin, who originally began the Macintosh project in 1979, and Steve Jobs both believed that hardware expandability was one of the primary obstacles in the way of personal computing’s broader consumer appeal. In short, expansion slots made standardization impossible (partly because software writers needed consistent underlying hardware to produce widely functioning products), whereas what Raskin and Jobs sought was an “identical, easy-to-use, low-cost appliance computer.” At this point, customization is no longer in the service of building, creating, or learning; it is, instead, for using the computer as one would any home appliance, and ideally this customization is possible only through software that the user drops into the computer via disk, just as they would drop a piece of bread into a toaster. Predictably, then, the original plan for the Macintosh had it tightly sealed so that the user was free to use only the peripherals on the outside of the machine. While team member Burrell Smith managed to convince Jobs to allow him to add in slots for users to expand the machine’s RAM, Macintosh owners were still “sternly informed that only authorized dealers should attempt to open the case. Those flouting this ban were threatened with a potentially lethal electric shock.”

That Apple could successfully gloss over the aggressively closed architecture of the Macintosh while at the same time market it as a democratic computer “for the people” marks just one more remarkable reversal from this period in the history of computing. As is clear in the advertisement below that came out in Newsweek Magazine during the 1984 election cycle, the Macintosh computer was routinely touted as embodying the principle of democracy. While it was certainly more affordable than the Lisa (in that it sold for the substantially lower price of $2495), its closed architecture and lack of flexibility could still easily allow one to claim it represented a decidedly undemocratic turn in personal computing.

Thus, 1984 became the year that Apple’s philosophy of the computer-as-appliance, encased in an aesthetically pleasing exterior, flowered into an ideology. We can partly see how their ideology of the user-friendly came to fruition through their marketing campaign which included a series of magazine ads such as the one below—


Advertisement for the Apple Macintosh from the November/December 1984 issue of Newsweek Magazine.

—along with one of the most well-known TV commercials of the late twentieth century. In the case of the latter, Apple takes full advantage of the powerful resonance still carried by George Orwell’s dystopian, post-World War II novel 1984, reassuring us in the final lines of the commercial, which aired on 22 January 1984, that “On January 24th Apple Computer will introduce Macintosh. And you’ll see why 1984 won’t be like ‘1984.’”

Apple positions the Macintosh, then, as a tool for and of democracy while also pitting the Apple philosophy against a (non-existent) ‘other’ – perhaps communist, perhaps IBM or ‘Big Blue’ – who is attempting to oppress us with an ideology of bland sameness. Apple’s ideology, then, “saves us” from a vague and fictional, but no less threatening, Orwellian nightmare. As lines of robot-like people, all dressed in identical grey, shapeless clothing, march into the opening scene of the commercial, a narrator of this pre-Macintosh nightmare appears on a screen before them in something that appears to be a propaganda film. We hear, spoken fervently, “Today we celebrate the first glorious anniversary of the Information Purification Directives.” And, as Apple’s hammer-thrower enters the scene, wearing bright red shorts and pursued by soldiers, the narrator of the propaganda film continues:

We have created for the first time in all history a garden of pure ideology, where each worker may bloom, secure from the pests of any contradictory true thoughts. Our Unification of Thoughts is more powerful a weapon than any fleet or army on earth. We are one people, with one will, one resolve, one cause. Our enemies shall talk themselves to death and we will bury them with their own confusion.

And just before the hammer is thrown at the film screen, causing a bright explosion that stuns the grey-clad viewers, the narrator finally declares, “We shall prevail!” But who exactly is the hammer-thrower-as-underdog fighting against? Who shall prevail – Apple or Big Brother? Who is warring against whom in this scenario, and why? In the end, all that matters is that, at this moment, just two days before the official release of the Macintosh, Apple has created a powerful narrative of its unquestionable, even natural superiority over other models of computing, one that continues well into the twenty-first century. It is an ideology that of course masks itself as such, born out of the creation of, and then opposition to, a fictional, oppressive ideology we users/consumers need to be saved from. In this context, the fervor with which even Macintosh team members believed in the rightness and goodness of their project is somewhat less surprising; they were quoted in Esquire earnestly declaring, “Very few of us were even thirty years old…We all felt as though we had missed the civil rights movement. We had missed Vietnam. What we had was the Macintosh.”

Even non-fiction accounts of the Macintosh by non-Apple employees could not help but endorse it in terms as breathless as those used by the Macintosh team members themselves. Steven Levy’s Insanely Great, from 1994, is a document as remarkable for its wholesale endorsement of this new model of personal computing as any of the Macintosh advertisements and guidebooks. Recalling his experience of seeing a demonstration of a Macintosh in 1983, he writes:

Until that moment, when one said a computer screen “lit up,” some literary license was required…But we were so accustomed to it that we hardly even thought to conceive otherwise. We simply hadn’t seen the light. I saw it that day…By the end of the demonstration, I began to understand that these were things a computer should do. There was a better way (4).

The Macintosh was not simply one of several alternatives; it represented the unquestionably right way for computing. Writing in 1993, Levy still declares that each time he turns on his Macintosh he is reminded “of the first light I saw in Cupertino, 1983. It is exhilarating, like the first glimpse of green grass when entering a baseball stadium. I have essentially accessed another world, the place where my information lives. It is a world that one enters without thinking of it…an ephemeral territory perched on the lip of math and firmament” (5). But it is precisely this legacy of the unthinking, invisible nature of the so-called “user-friendly” Macintosh environment that has foreclosed on using computers for creativity and learning, and it continues in contemporary multi-touch, gestural, and ubiquitous computing devices such as the iPad and the iPhone, whose interfaces are touted as utterly invisible (and whose inner workings are therefore de facto inaccessible).

References

“‘1984’ Apple Macintosh Commercial.” YouTube. 27 Aug. 2008. Web. 21 June 2012.

Apple Computer Inc. Apple Human Interface Guidelines: The Apple Desktop Interface. Reading, MA: Addison-Wesley, 1987.

Bardini, Thierry. Bootstrapping: Douglas Engelbart, Coevolution, and the Origins of Personal Computing. Stanford, CA: Stanford UP, 2000.

Chen, Jung-Wei and Jiajie Zhang. “Comparing Text-based and Graphic User Interfaces for Novice and Expert Users.” AMIA Annual Symposium Proceedings Archive. 2007. Web. 14 February 2012.

Chun, Wendy. Programmed Visions: Software and Memory. Cambridge, MA: MIT Press, 2011.

Engelbart, Douglas. “Augmenting Human Intellect: A Conceptual Framework.” In The New Media Reader. Eds. Noah Wardrip-Fruin and Nick Montfort. Cambridge, MA: MIT Press, 2003. 95-108.

—. “Workstation History and the Augmented Knowledge Workshop.” Doug Engelbart Institute. 2008. Web. 3 April 2011.

—, and William English. “A Research Center for Augmenting Human Intellect.” In The New Media Reader. Eds. Noah Wardrip-Fruin and Nick Montfort. Cambridge, MA: MIT Press, 2003. 233-246.

Erickson, Thomas D. “Interface and the Evolution of Pidgins: Creative Design for the Analytically Inclined.” In The Art of Human-Computer Interface Design. Ed. Brenda Laurel. Reading, MA: Addison-Wesley Publishing Company, Inc., 1990. 11-16

Gassée, Jean-Louis. The Third Apple: Personal Computers & the Cultural Revolution. San Diego, New York, London: Harcourt Brace Jovanovich Publishers, 1985.

Goldberg, Adele. “Introducing the Smalltalk-80 System.” Byte 6:8 (August 1981): 14-26.

Hertzfeld, Andy and Steve Capps et al. Revolution in the Valley. Sebastopol, CA: O’Reilly, 2005.

Ingalls, Daniel. “Design Principles Behind Smalltalk.” Byte 6:8 (August 1981): 286-298.

Johnson, Jeff and Theresa Roberts et al. “The Xerox Star: A Retrospective.” Computer 22:9 (September 1989): 11-29.

Johnson, Steven. Interface Culture: How New Technology Transforms the Way We Create and Communicate. New York: Basic Books, 1997.

Kay, Alan. “The Early History of Smalltalk.” Smalltalk.org. Web. 5 April 2012.

—. “User Interface: A Personal View.” In The Art of Human-Computer Interface Design. Ed. Brenda Laurel. Reading, MA: Addison-Wesley Publishing Company, Inc., 1990. 191-207.

—, and Adele Goldberg. “Personal Dynamic Media.” In The New Media Reader. Eds. Noah Wardrip-Fruin and Nick Montfort. Cambridge, MA: MIT Press, 2003. 393-409.

Levy, Steven. Hackers: Heroes of the Computer Revolution. 25th Anniversary Edition. New York: O’Reilly Media, 2010.

—. Insanely Great: The Life and Times of Macintosh, the Computer that Changed Everything. New York: Viking, 1994.

Lewis, T.G. “Some Laws of Personal Computing.” Byte 4:10 (October 1979): 186-191.

Linden, Ted, Eric Harslem, Xerox Corporation. Office Systems Technology: A Look Into the World of the Xerox 8000 Series Products: Workstations, Services, Ethernet, and Software Development. Palo Alto, CA: Office Systems Division, 1982.

“LOGO.” Advertisement. Byte 7:2 (February 1982): 255.

Morgan, Chris, Gregg Williams, and Phil Lemmons. “An Interview with Wayne Rosing, Bruce Daniels, and Larry Tesler: A Behind-the-scenes Look at the Development of Apple’s Lisa.” Reprinted from Byte 8:2 (February 1983): 90-114. Web. 14 April 2012.

Nelson, Theodor. “Computer Lib / Dream Machines.” The New Media Reader. Eds. Noah Wardrip-Fruin and Nick Montfort. Cambridge, MA: MIT Press, 2003. 303-338.

Papert, Seymour. Mindstorms: Children, Computers, and Powerful Ideas. New York: Basic Books, 1980.

Reenskaug, Trygve. “User-Oriented Descriptions of Smalltalk Systems.” Byte 6:8 (August 1981): 148-166.

Reimer, Jeremy. “Total share: 30 years of personal computer market share figures.” Ars Technica. 2006. Web. 4 December 2011.

Rutkowski, Chris. “An Introduction to the Human Applications Standard Computer Interface: Part 1: Theory and Principles.” Byte 7:10 (October 1982): 291-310.

—. “An Introduction to the Human Applications Standard Computer Interface: Part 2: Implementing the HASCI Concept.” Byte 7:11 (November 1982): 379-390.

Smith, David Canfield and Charles Irby et al. “Designing the Star User Interface.” Byte 7:4 (April 1982): 242-282.

Tesler, Larry. “The Legacy of the Lisa.” Macworld magazine (September 1985): 17-22.

Wardrip-Fruin, Noah. Introduction to “A Research Center for Augmenting Human Intellect,” by Douglas Engelbart. In The New Media Reader. Eds. Noah Wardrip-Fruin and Nick Montfort. Cambridge, MA: MIT Press, 2003. 231-232.

“What is Logo?” The Logo Foundation. 2011. Web. 5 April 2012.

Whiteside, John, Sandra Jones, Paul S. Levy, and Dennis Wixon. “User Performance with Command, Menu, and Iconic Interfaces.” CHI 1985 Proceedings. April 1985. 185-191.

Wilber, Mike and David Fylstra. “Homebrewery vs the Software Priesthood.” Byte 14 (October 1976): 90-94.

Williams, Gregg. “The Lisa Computer System: Apple Designs a New Kind of Machine.” Product Description. Byte 8:2 (February 1983): 33-50.

Wozniak, Steve. “The Apple-II.” System Description. Byte 2:5 (May 1977): 34-43.

“The whole world is faking it”: Computer-Generated Poetry as Linguistic Evidence

The following is a short review I wrote of discourse.cpp (pdf available here) by O.S. le Si, ed. Aurélie Herbelot, published by the Berlin-based Peer Press in 2011. The review was just published in the December issue of Computational Linguistics.

*

discourse.cpp (Peer Press, 2011) is a short collection of computer-generated poetry edited by the computational linguistics scholar Aurélie Herbelot, produced by a computer called O.S. le Si that is mainly used for natural language processing and that is named after a program which tries to identify the meanings of words based on their context. In this case, Herbelot fed 200,000 pages from Wikipedia to the program, which then parsed them and output lists of items whose contexts are similar to those of words such as “gender,” “love,” “family,” and “illness.” For example, Herbelot explains that the content in the opening piece, titled “the creation,” was “selected out of a list of 10,000 entries. Each entry was produced by automatically looking for taxonomic relationships in Wikipedia”; and for the piece titled “gender,” she chose the “twenty-five best contexts for man and woman in original order. No further changes” (47). The collection is, then, as we are told on the back cover, “about things that people say about things. It was written by a computer.”
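For readers curious about the mechanics, the procedure Herbelot describes (gathering the contexts in which a target word occurs and ranking the strongest ones) can be suggested by a toy sketch along these lines; this is my hypothetical illustration in Python, not her actual program, which parsed Wikipedia for grammatical relationships and is far more sophisticated:

```python
# Toy sketch only: rank the words that co-occur with a target word,
# a crude stand-in for the 'best contexts' procedure Herbelot
# describes. Not the actual discourse.cpp program.
from collections import Counter

def best_contexts(tokens, target, window=2, n=25):
    """Return the n most frequent words appearing within `window`
    positions of `target` in a list of tokens."""
    contexts = Counter()
    for i, token in enumerate(tokens):
        if token == target:
            lo = max(0, i - window)
            contexts.update(tokens[lo:i] + tokens[i + 1:i + 1 + window])
    return contexts.most_common(n)

sample = "the woman gave birth and the man won the title".split()
print(best_contexts(sample, "man"))  # e.g. [('the', 2), ('and', 1), ...]
```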

Poets – or, for the sake of those still attached to the notion of an author who intentionally delivers well-crafted, expressive writing, “so-called poets” – have been producing writing with the aid of digital computer algorithms since Max Bense and Theo Lutz first experimented with computer-generated writing in 1959. The most well-known English-language example is The Policeman’s Beard is Half-Constructed (1984), a collection of poems by the Artificial Intelligence program Racter (a collection which was, it was later discovered, heavily edited by Racter creators William Chamberlain and Thomas Etter). discourse.cpp is yet another experiment in testing the capabilities of the computer and the computer-programmer to create not so much “good” poetry as revealing poetry – poetry that is not meant to be close-read (most often to discover underlying authorial intent) but rather read as a collection of a kind of linguistic evidence. In this case, the collection provides evidence of the computer program’s probings of trends in online human language usage, which in turn, not surprisingly, provide evidence of certain prevailing cultural norms; for example, we can see quite clearly our culture’s continued attachment to heteronormative gender roles in “Gender”:

Woman                        Man
man love —                    — win title
— marry man                — love woman
— give birth                   — claim be (18)

More, this linguistic evidence also draws attention to the ever-increasing intertwinement of human and digital computer and the resulting displacement of the human as sole reader-writer now that the computer is also a reader-writer alongside (and often in collaboration with) the human.

As Herbelot rightly points out in the “Editor’s Foreword,” to a large extent this experimentation with the computer as reader-writer also comes out of early twentieth-century avant-garde writing that similarly sought to undermine, if not displace, the individual intending author. The Dadaist Tristan Tzara, for instance, infamously wrote “TO MAKE A DADAIST POEM” (1920), in which he advocates writing poetry by cutting out words from a newspaper article, randomly choosing these words from a hat, and then appropriating them to create a poem by “an infinitely original author of charming sensibility.” Tzara was, of course, being typically Dadaist in his tongue-in-cheek attitude; but he was also, I believe, serious in his belief that the combination of appropriation and chance-generated methods of producing text could produce original writing that simultaneously undermined the egotism of the author. However, insofar as discourse.cpp descends from a lineage of experimental writing invested in chance-generated methods and, later, in exploiting computer technology as the latest means by which to produce such writing, it also inherits a certain disingenuousness that comes along with this lineage. No matter how much Tzara and later authors of computer-generated writing sought to remove the human-as-author, there was and still is no getting around the fact that humans are deeply involved in the creation process – whether as cutters-and-pasters, computer programmers, inputters, or editors. The collection, then, is a much more complex amalgam than even Herbelot seems willing to acknowledge, as discourse.cpp is evidence of the evenly distributed reading and writing that took place between Herbelot and the computer/program itself.

Media Archaeology and Digital Stewardship

I was fortunate to have the chance to think through the relationship between the field of media archaeology, the Media Archaeology Lab, and digital preservation/stewardship thanks to this interview with Trevor Owens on the Library of Congress blog, The Signal, called “Media Archaeology and Digital Stewardship: An Interview with Lori Emerson.” The invitation to talk with Trevor was particularly fortuitous because Matthew Kirschenbaum had been here at CU Boulder the week before, discussing these very same issues in a faculty seminar he led called “Doing Media Archaeology.” You can read the interview here – I’d be interested in hearing comments you might have, especially about the possibility of a hardware/software resource sharing program.

mobile poetics: a select bibliography of digital textuality/art apps

I’ve been building a bibliography for a while now of digital textuality/art apps for the iPhone and iPad. The list below is far from complete but will hopefully be useful to those of you teaching students how to read and/or write digital textuality/art. Some links go directly to the download page while others go to pages with information on particular apps. Please let me know if you have any other works you think I should add to this list.

latest addition to the Archeological Media Lab: original “First Screening” 5.25 inch floppies

I had the great fortune of meeting Lionel Kearns in Vancouver last spring and discussing bpNichol’s 1984 Apple IIe poem “First Screening.” (If you don’t know Kearns, he is a longtime Vancouver-based poet who was a student of Earle Birney and also one of the four people who first rescued “First Screening.”) After explaining that I had managed, with the assistance of Jim Andrews, to obtain copies of “First Screening” for the Archeological Media Lab to run on the Apple IIes, Kearns immediately and generously offered to donate original working copies of the poems that bp was working on when he visited Kearns in the early 80s. I’m thrilled to report the floppies arrived last week, safe and sound, with this note from Kearns: “I am not sure of the actual date, but it was some time previous to the actual publication on disk of the collection of poems by Underwhich Editions.”

MLA 2013 Special Session: Reading the Invisible and Unwanted in Old & New Media

[February 2013: I’ve posted an extended version of my MLA 2013 paper here.]

Below is the description for the MLA ’13 special session panel that Paul Benzon, Mark Sample, Zach Whalen, and I will present on in January. We’re thrilled to have the opportunity to pursue together issues related to Media Archaeology.

*

Media studies is growing increasingly visible within the broader disciplines of literary and cultural studies, with several critical approaches bringing valuable shape and context to the field. Prominent among these approaches is a turn away from media studies’ longstanding fixation upon the new or the innovative as the most urgent and deserving site of study. Drawing on methodologies as diverse as book history, media archaeology, and videogame studies, this work on earlier media technologies has forged provocative connections between past and present contexts that hinge upon disjuncture and nonlinearity as often as upon continuity and teleology. At the same time, an increased attention to the material particulars of inscription, storage, circulation, and reception has developed the field beyond an early focus on narrative and representation.

New media scholars now look beyond screen-based media, to a broader range of technologies and sites of inquiry. This panel seeks to consider unseen, lost, or unwanted histories of writing/media. Each of the panelists focuses on a particular technology that is not only invisible to the broad history of media technology, but also relies upon loss and invisibility for its very functionality. In keeping with this dual valence, our emphasis on loss and invisibility is intended to raise questions aimed at our specific objects of analysis, but also at the deeper historical and disciplinary questions that these objects speak to: how does our understanding of media technology change when we draw attention to objects and processes that are designed to be invisible, out of view, concealed within the machine, or otherwise beyond the realm of unaided human perception? What happens when we examine the technological, social, and ideological assumptions bound up with that invisibility? How does privileging invisibility shed new light on materiality, authorship, interface, and other central critical questions within media studies?

The vexing relationship between invisibility and transparency is addressed head-on in Lori Emerson’s paper, “Apple Macintosh and the Ideology of the User-Friendly.” Emerson suggests that the “user-friendly” graphical user interface (GUI) that was introduced via the Apple Macintosh in 1984 was – and still is – driven by an ideology that celebrates an invisible interface instead of offering users transparent access to the framing mechanisms of the interface as well as the underlying flow of information. Emerson asserts this particular philosophy of the user-friendly was a response to earlier models of home computers which were less interested in providing ready-made tools through an invisible interface and more invested in educating users and providing them with the means for tool-building. Thus, the Apple Macintosh model of the GUI is clearly related to contemporary interfaces that utterly disguise the ways in which they delimit not only our access to information but also what and how we read/write.

A desire to renew critical attention on the most taken-for-granted aspect of computer writing and reading is at the heart of Zach Whalen’s paper, “OCR and the Vestigial Aesthetics of Machine Vision.” Whalen examines the origins of the technology that allows machines to read and process alphanumeric characters. While graceful typography is said to work best when it is not noticed – in other words, when hidden in plain sight – early OCR fonts had to become less hidden in order to make their text available for machine processing. Whalen focuses on the OCR-A font and the contributions of OCR engineer Jacob Rabinow, who argued on behalf of ugly machine-readable type that its intrinsically artificial geometry, although historically and technically contingent, could become its own aesthetic signifier.

The condensation and invisibility of textual information is taken up by Paul Benzon in his paper, “Lost in Plain Sight: Microdot Technology and the Compression of Reading.” Benzon uses the analog technology of the microdot, in which an image of a standard page of text is reduced to the size of a period, as a framework to consider questions of textual and visual materiality in new media. Benzon’s discussion focuses on the work of microdot inventor Emanuel Goldberg, who in the fifties worked alongside and in competition with the engineer Vannevar Bush, a seminal figure for new media studies. Benzon transforms the disregarded history of textual storage present in Goldberg’s work into a counter-narrative to the more hegemonic ideology of hypertext that has dominated new media studies.

Turning to an entirely invisible process that we can only know by its product, Mark Sample considers the meaning of machine-generated randomness in electronic literature and videogames in his paper, “An Account of Randomness in Literary Computing.” While new media critics have looked at randomness as a narrative or literary device, Sample explores the nature of randomness at the machine level, exposing the process itself by which random numbers are generated. Sample shows how early attempts at mechanical random number generation grew out of the Cold War, and then how later writers and game designers relied on software commands like RND (in BASIC), which seemingly simplified the generation of random numbers but which in fact were rooted in – and constrained by – the particular hardware of the machine itself.
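To make the machine-level point concrete: home-computer RND commands were typically implemented as some variant of a linear congruential generator, a short deterministic recurrence whose constants and period are fixed by the machine’s word size. Here is a hedged sketch in Python with illustrative constants (Numerical Recipes’ values, not those of any particular BASIC):

```python
# Sketch of a linear congruential generator (LCG), the family of
# algorithms behind many RND implementations. The constants a, c, m
# are illustrative (Numerical Recipes' values), not those of any
# particular home computer's BASIC.
def lcg(seed, a=1664525, c=1013904223, m=2**32):
    """Yield a deterministic stream of pseudo-random floats in [0, 1).
    The modulus m mirrors a 32-bit word: the hardware constraint that
    bounds the generator's period."""
    state = seed
    while True:
        state = (a * state + c) % m
        yield state / m

gen = lcg(seed=42)
print([round(next(gen), 4) for _ in range(5)])
# Re-running with seed=42 reproduces the identical 'random' sequence:
# nothing here is random in any physical sense.
```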

These four papers share a common impulse, which is to imagine alternate or supplementary media histories that intervene into existing scholarly discussions. By focusing on these forgotten and unseen dimensions, we seek to complicate and enrich the ways in which literary scholars understand the role of technologies of textual production within contemporary practices of reading and writing. With timed talks of 12 minutes each, the session sets aside a considerable amount of time for discussion. This panel will build on a growing conversation among MLA members interested in theoretically inflected yet materially specific work on media technologies, and it will also appeal to a broad cross-section of the MLA membership, including textual scholars, digital humanists, literary historians, electronic literature critics, and science and technology theorists.