“Computers for the Arts,” Dick Higgins (1968)

from “Computers for the Arts,” by Dick Higgins

About a year ago, I was working on the third chapter of Reading Writing Interfaces – “Typewriter Concrete Poetry as Activist Media Poetics” – in the course of which I discovered, among other things, the mutual influence of concrete poetry and Marshall McLuhan. One figure I promised myself I would research further once I’d finished my book was Dick Higgins – self-proclaimed ‘intermedia poet’ and publisher of Something Else Press. Higgins, I found, was among those most obviously influenced by McLuhan, no doubt in large part because he published McLuhan’s Verbi-Voco-Visual Explorations the same year his press published the first major anthology of concrete poetry, Emmett Williams’ An Anthology of Concrete Poetry. Invested as he was in poetry that situates itself between two or more inseparable media, Higgins’ notion of intermedia is obviously saturated with McLuhan’s notions of the new electric age and the global village; as he wrote in his “Statement on Intermedia” in 1966, the year before publishing the volumes by McLuhan and Williams:

Could it be that the central problem of the next ten years or so, for all artists in all possible forms, is going to be less the still further discovery of new media and intermedia, but of the new discovery of ways to use what we care about both appropriately and explicitly?

Higgins also published his own pamphlet, “Computers for the Arts,” in 1970 (written in 1968; pdf available here), which I’ve just now had a chance to track down and scan. What interests me most about this little pamphlet is how it anticipates so much of the digital art/writing and network art/writing to come in the next forty-plus years – experiments in using computers against themselves, or against what Higgins describes in 1970 as their economic uses in science and business. “However,” he writes, “their uses are sufficiently versatile to justify looking into a number of the special techniques for the solution of creative problems.” In “Computers for the Arts,” he goes on to explore how FORTRAN in particular can be used to generate poems, scenarios, and what he calls “propositions” that work through these creative problems in, for example, “1.64 minutes, as opposed to the 16 hours needed to make the original typewritten version.” But the larger point is about understanding tools as processes, just as Alan Kay, Ted Nelson, and others advocated throughout the 1970s and 1980s:

When the artist is able to eliminate his irrational attitudes (if any) about the mythology of computers, and becomes willing not simply to dump his fantasies in the lap of some startled engineer, but to supply the engineer with:

  1. the rudiments of his program in such a language as FORTRAN or one of the other very common ones;
  2. a diagram of the logic of his program, such as I just used to illustrate…
  3. a page or so of how he would like the printout to look

then he will be in a position to use the speed and accuracy of computers. There will be few of the present disappointments, which are due usually more often to the artist’s naivete than to the engineer’s lack of information or good will. The onus is on the artist, not his tools, to do good work.
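Higgins’s own FORTRAN listings aren’t reproduced in this post, but the kind of program he describes – exhaustively permuting a small stock of words into printed “propositions” far faster than a typist could – can be sketched in a few lines of modern Python (the word lists below are hypothetical, purely for illustration, not Higgins’s own):

```python
import itertools

# Hypothetical lexicon standing in for the artist's word lists.
subjects = ["the audience", "a typewriter", "the lake"]
verbs = ["burns", "whispers", "dissolves"]

# Enumerate every subject/verb pairing as a one-line "proposition" --
# the kind of exhaustive combination a computer runs off in seconds
# where a typist would need hours.
propositions = [f"{s} {v}." for s, v in itertools.product(subjects, verbs)]

for line in propositions:
    print(line)
```

The translation preserves Higgins’s point: the enumeration itself is trivial; the artist’s work lies in specifying the lexicon, the logic, and the look of the printout – exactly the three things he asks the artist to hand the engineer.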

Here, then, is a pdf of “Computers for the Arts.” Enjoy!


From the Philosophy of the Open to the Ideology of the User-Friendly

Below is an excerpt from chapter two, “From the Philosophy of the Open to the Ideology of the User-Friendly,” of my book Reading Writing Interfaces: From the Digital to the Bookbound (University of Minnesota Press, 2014). It is also the basis of the talk I gave at MLA in January 2013 and the full version of the talk I gave at Counterpath Press in February 2013. As always, I welcome your comments!


“Knowledge is power: information is the fabric of knowledge; the controller of information wields power.”
–“Some Laws of Personal Computing,”
Byte 1979 (Lewis 191)

“If a system is to serve the creative spirit, it must be entirely comprehensible to a single individual…Any barrier that exists between the user and some part of the system will eventually be a barrier to creative expression. Any part of the system that cannot be changed or that is not sufficiently general is a likely source of impediment.”
–“Design Principles Behind Smalltalk,” Byte 1981 (Ingalls 286)

My talk today is concerned with a decade in which we can track a shift in what “user-friendly” meant: from a computer that, through a graphical user interface (GUI), encourages understanding, tinkering, and creativity, to one that uses a GUI to create an efficient work-station for productivity and task-management – and with the effects of this shift on digital literary production in particular. The turn from computer systems based on the command-line interface to those based on “direct manipulation” interfaces that are iconic or graphical was driven by rhetoric insisting that the GUI, particularly the one pioneered by the Apple Macintosh design team, was not just different from the command-line interface but naturally better, easier, friendlier. The Macintosh was, as Jean-Louis Gassée (who headed up its development after Steve Jobs’s departure in 1985) writes without any hint of irony, “the third apple,” after the first apple in the Old Testament and the second apple that was Isaac Newton’s, “the one that widens the paths of knowledge leading toward the future.”

Despite studies released since 1985 that clearly demonstrate GUIs are not necessarily better than command-line interfaces in terms of how easy they are to learn and to use, Apple – particularly under Jobs’s leadership – successfully created such a convincing aura of inevitable superiority around the Macintosh GUI that to this day the same “user-friendly” philosophy, paired with a closed architecture that now goes unnoticed, fuels consumers’ religious zeal for Apple products. I have been an avid consumer of Apple products since I owned my first Macintosh Powerbook in 1995; what concerns me, however, is that ‘user-friendly’ now takes the shape of keeping users steadfastly unaware and uninformed about how their computers – their reading/writing interfaces – work, let alone about how those interfaces shape and determine their access to knowledge and their ability to produce knowledge. As Wendy Chun points out, the user-friendly system is one in which users are, on the one hand, given the ability to “map, to zoom in and out, to manipulate, and to act,” but the result is a “seemingly sovereign individual” who is mostly a devoted consumer of ready-made software and ready-made information, the framing and underlying mechanisms of which users are not privy to.

However, it’s not necessarily the GUI per se that is responsible for the creation of Chun’s “seemingly sovereign individual” but rather a particular philosophy of computing and design, underlying one model of the GUI, that has become the standard for nearly all interface design. The earliest example of a GUI-like interface whose philosophy is fundamentally different from that of the Macintosh is Douglas Engelbart’s NLS, or “oN-Line System,” which he began work on in 1962 and famously demonstrated in 1968. While his “interactive, multi-console computer-display system” with keyboard, screen, mouse, and something he called a chord handset is commonly cited as the originator of the GUI, Engelbart wasn’t so much interested in creating a user-friendly machine as he was invested in “augmenting human intellect”. As he first put it in 1962, this augmentation meant “increasing the capability of a man to approach a complex problem situation, to gain comprehension to suit his particular needs, and to derive solutions to problems”. The NLS was not about providing users with ready-made software and tools to choose from or consume; rather, it was about bootstrapping, or “the creation of tools for expert computer users,” and about providing the means for users to create better tools – tools better suited to their own individual needs. We can see this emphasis on tool-building and customization, growing out of an augmented intellect, in Engelbart’s provision of “view-control” (which allows users to determine how much text they see on the screen as well as the form of that view) and “chains of views” (which allows the user to link related files) in his document editing program.

Underlining the fact that the history of computing is resolutely structured by stops, starts, and ruptures rather than by a series of linear firsts, in the year before Engelbart gave his “mother of all demos,” Seymour Papert and Wally Feurzeig began work on a learning-oriented programming language they called ‘Logo’ that was explicitly for children but implicitly for learners of all ages. Throughout the 1970s, Papert and his team at MIT conducted research with children in nearby schools as they tried to create a version of Logo defined by “modularity, extensibility, interactivity, and flexibility”. The Apple II was the most popular home computer from the late 1970s until the mid-1980s and, given its open architecture, public versions of Logo were licensed for Apple II computers as well as for the less popular Texas Instruments TI 99/4. In 1980, Papert published the decidedly influential Mindstorms: Children, Computers, and Powerful Ideas, in which he makes claims about the power of computers that are startling for a contemporary readership steeped in an utterly different notion of what accessible or user-friendly computing might mean. Describing his vision of “computer-aided instruction” in which “the child programs the computer” rather than one in which the child adapts to, or even is taught by, the computer, Papert asserts that children thereby “embark on an exploration about how they themselves think…Thinking about thinking turns the child into an epistemologist, an experience not even shared by most adults” (19). And two years later, in the February 1982 issue of Byte magazine, Logo is advertised as a general-purpose tool for thinking, with a degree of intellectuality rare for any advertisement: “Logo has often been described as a language for children. It is so, but in the same sense that English is a language for children, a sense that does not preclude its being ALSO a language for poets, scientists, and philosophers”.
Moreover, for Papert, thinking about thinking by way of programming happens largely when the user encounters bugs in the system and has to identify where the bug is in order to remove it: “One does not expect anything to work at the first try. One does not judge by standards like ‘right – you get a good grade’ and ‘wrong – you get a bad grade.’ Rather one asks the question: ‘How can I fix it?’ and to fix it one has first to understand what happened in its own terms” (101). Learning through doing – tinkering, experimentation, trial-and-error – is, then, how one comes to have a genuine computer literacy.

In the year after Papert et al. began work on Logo, and in the same year as Engelbart’s NLS demo, Alan Kay commenced work on the never-realized Dynabook, produced as an “interim Dynabook” in 1973 in the form of the GUI-based Xerox Alto, which could also run the Smalltalk language. Kay thereby introduced the notion of “personal dynamic media” for “children of all ages” which “could have the power to handle virtually all of its owner’s information-related needs”. Kay, then, along with Engelbart and Papert, understood very clearly the need for computing to move from the specialized environment of the research lab into people’s homes by way of a philosophy of the user-friendly oriented toward the flexible production (rather than rigid consumption) of knowledge. It was a realization eventually shared by the broader computing community, for by 1976 Byte magazine was publishing editorials such as “Homebrewery vs the Software Priesthood,” declaring that “the movement towards personalized and individualized computing is an important threat to the aura of mystery that has surrounded the computer for its entire history” (90). And more:

The movement of computers into people’s homes makes it important for us personal systems users to focus our efforts toward having computers do what we want them to do rather than what someone else has blessed for us…When computers move into peoples’ homes, it would be most unfortunate if they were merely black boxes whose internal workings remained the exclusive province of the priests…Now it is not necessary that everybody be a programmer, but the potential should be there…(90).

from “Homebrewery vs the Software Priesthood,” Byte magazine, October 1976

It was precisely this potential for programming – for novice as well as expert use via an open, extensible, and flexible architecture – that Engelbart, Papert, and Kay sought to build into their models of the personal computer, to ensure that home computers did not become “merely black boxes whose internal workings remained the exclusive province of the priests.” Thus, as Kay exhorted his readers in 1977, “imagine having your own self-contained knowledge manipulator in a portable package the size and shape of an ordinary notebook”. Designed to have a keyboard, an NLS-inspired “chord” keyboard, a mouse, a display, and windows, the Dynabook would allow users to realize Engelbart’s dream of a computing device that gave them the ability to create their own ways to view and manipulate information. Rather than the over-determined post-Macintosh GUI computer, designed to pre-empt every user’s possible need with an over-abundance of ready-made tools such that “those who wish to do something different will have to put in considerable effort,” Kay wanted a machine that was “designed in a way that any owner could mold and channel its power to his own needs…a metamedium, whose content would be a wide range of already-existing and not-yet-invented media” (403). Moreover, Kay understood from reading Marshall McLuhan that the design of this new metamedium was no small matter, for the very use of a medium changes an individual’s – a culture’s – thought patterns. Clearly, he wanted thought patterns to move toward a literacy that involved reading and writing in the new medium rather than the unthinking consumption of ready-made tools, for, crucially, “the ability to ‘read’ a medium means you can access materials and tools created by others. The ability to ‘write’ in a medium means you can generate materials and tools for others. You must have both to be literate”.

While Kay envisioned that the GUI-like interface of the Dynabook would play a crucial role in realizing this “metamedium,” the Smalltalk software driving the interface was equally necessary. Its goal was “to provide computer support for the creative spirit in everyone” (286). Not surprisingly, Kay and his collaborators began working intensely with children after the creation of Smalltalk-71. Influenced by developmental psychologist Jean Piaget, as well as by his own observation in 1968 of Papert and his colleagues’ use of Logo, Kay had Smalltalk rely heavily on graphics and animation through one particular incarnation of the GUI: the Windows, Icons, Menus, and Pointers (or WIMP) interface. Kay writes that in the course of observing Papert using Logo in schools, he realized that these were children “doing real programming…”:

  …this encounter finally hit me with what the destiny of personal computing really was going to be. Not a personal dynamic vehicle, as in Engelbart’s metaphor opposed to the IBM “railroads”, but something much more profound: a personal dynamic medium. With a vehicle one could wait until high school and give “drivers ed”, but if it was a medium, it had to extend into the world of childhood (“The Early History” 81).

As long as the emphasis in computing was on learning – especially through making and doing – the target demographic was going to be children; and as long as children could use the system, so too could any adult, provided they understood the underlying structure – the how and the why – of the programming language. As Kay astutely points out, “…we make not just to have, but to know. But the having can happen without most of the knowing taking place”. And, as he goes on to point out, designing the Smalltalk user interface shifted the purpose of interface design from “access to functionality” to an “environment in which users learn by doing” (84). Smalltalk’s designers, then, didn’t so much reject the notion of readymade software as seek to provide the user with a set of software building blocks that the user could combine and edit to create their own customized system. Or, as Trygve Reenskaug (a Norwegian computer scientist visiting the Smalltalk group at Xerox PARC in the late 1970s) put it:

 …the new user of a Smalltalk system is likely to begin by using its ready-made application systems for writing and illustrating documents, for designing aircraft wings, for doing homework, for searching through old court decisions, for composing music, or whatever. After a while, he may become curious as to how his system works. He should then be able to “open up” the application object on the screen to see its component parts and to find out how they work together (166).

With an emphasis on learning and building through an open architecture, Adele Goldberg – co-developer of Smalltalk along with Alan Kay and author of most of the Smalltalk documentation – describes, in the same 1981 issue of Byte, a Smalltalk programming environment that set out to defy the conventional software development environment illustrated in Figure 1 below:


Image by Adele Goldberg contrasting the conventional philosophy of software driven by “wizards” in Figure 1 versus that provided by Smalltalk for the benefit of the programmer/user in Figure 2.

The Taj Mahal in Figure 1 “represents a complete programming environment, which includes the tools for developing programs as well as the language in which the programs are written. The users must walk whatever bridge the programmer builds” (Goldberg 18). Figure 2, by contrast, represents a Taj Mahal in which the “software priest” is transformed into one who merely provides the initial shape of the environment which programmers can then modify by building “application kits” or “subsets of the system whose parts can be used by a nonprogrammer to build a customized version of the application” (18). The user or non-programmer, then, is an active builder in dialogue with the programmer instead of a passive consumer of a pre-determined (and perhaps even over-determined) environment.

At roughly the same time as he began work on Smalltalk in the early 1970s, Kay was also involved with the team designing the NLS-inspired Xerox Alto, developed in 1973 as, again, an “interim Dynabook”: it had a three-button mouse and a GUI that worked in conjunction with the desktop metaphor, and it ran Smalltalk. While only several thousand non-commercially available Altos were manufactured, it was – as team members Chuck Thacker and Butler Lampson believe – probably the first computer explicitly called a “personal computer,” because of its GUI and its network capabilities. By 1981, Xerox had designed and produced a commercially available version of the Alto, the 8010 Star Information System, sold along with Smalltalk-based software. But as Jeff Johnson et al. point out, the most important connection between Smalltalk and the Xerox Star lay in the fact that Smalltalk could clearly illustrate the compelling appeal of a graphical display that the user accessed via mouse, overlapping windows, and icons (22).


Screenshot of Xerox Star from Jeff Johnson et al’s “The Xerox Star: A Retrospective.”

However, the significance of the Star lies partly in the indisputable impact it had on the GUI design of first the Apple Lisa and then the Macintosh; it also lies in the way the Star was clearly labeled a work-station for “business professionals who handle information” rather than a metamedium, a tool for creating or even for thinking about thinking. And in fact the Star – the first commercially available computer born out of the work of Engelbart, Papert, and Kay that attempted to satisfy both novice and expert users by providing an open, extensible, flexible environment, and that also happened to be graphical – was conflicted at its core. While in some ways the Star was philosophically very much in line with the open thinking of Engelbart, Papert, and Kay, in other ways its philosophy as much as its GUI directly paved the way to the closed architecture and consumption-based design of the Macintosh. Take, for example, the overall design principles of the Star, which were aimed at making the system seem “familiar and friendly”:

Easy                             Hard

concrete                     abstract
visible                         invisible
copying                      creating
choosing                    filling in
recognizing               generating
editing                        programming
interactive                 batch

Star designers vowed to avoid the characteristics listed on the right while adhering to a schema that exemplifies the characteristics listed on the left. While there’s little doubt that ease-of-use was of central importance to Engelbart, Papert and Kay – often brought about through interactivity and making computer operations and commands visible – the avoidance of “creating,” “generating,” or “programming” couldn’t be further from their vision of the future of computing. At the same time as the Star forecloses on creating, generating, and programming through its highly restrictive set of commands in the name of simplicity, it also wants to promote users’ understanding of the system as a whole – although, again, we can see this particular incarnation of the GUI represents the beginning of a shift toward only a superficial understanding of the system. Without a fully open, flexible, and extensible architecture, the home computer becomes less a tool for learning and creativity and more a tool for simply “handling information.”

By contrast, as I’ll now talk about, the Apple Macintosh was clearly designed for consumers, not creators. It was marketed as a democratizing machine when in fact it was democratizing only insofar as it marked a profound shift in personal computing away from the sort of inside-out know-how one needed to create on an Apple II to the kind of perfunctory know-how one needed to navigate the surface of the Macintosh – the kind of knowledge needed to click this or that button. The Macintosh was democratic only in the manner any kitchen appliance is democratic. That said, Apple’s redefinition of the overall philosophy of personal computing exemplifies just one of many reversals that abound in this ten-year period from the mid-1970s to the mid-1980s. In relation to the crucial change that took place in the mid-1980s – from open, flexible, and extensible computing systems for creativity to ones that were closed, transparent, and task-oriented – the way in which the Apple Macintosh was framed from the time of its release in January 1984 represented a near complete purging of the philosophy promoted by Engelbart, Kay, and Papert. This purging of the recent past took place under the guise of Apple’s version of the user-friendly, which, among other things, pitted itself against the supposedly “cryptic,” “arcane,” “phosphorescent heap” that was the command-line interface as well as, it was implied, any earlier incarnation of the GUI.

However, it’s important to note that, while the Macintosh philosophy purged much of what had come before, it in fact emerged from momentum gathering in other parts of the computing industry, which were particularly concerned to define standards for the computer interface. Up to this point, personal computers were remarkably different from each other. Commodore 64 computers, for example, came with both a ‘Commodore’ key that gave the user access to an alternate character set and four programmable function keys that, with the shift button, could each be assigned two different functions. By contrast, Apple II computers came with two programmable function keys, and Apple III, IIc, and IIe computers came with open-Apple and closed-Apple keys that provided the user with shortcuts to operations such as cut-and-paste or copy (much as the contemporary ‘command’ key functions).

No doubt in response to the difficulties this variability posed to expanding the customer base for personal computers, Byte magazine ran a two-part series in October and November 1982 dedicated to the issue of industry standards by way of an introduction to a proposed uniform interface called the “Human Applications Standard Computer Interface” (or HASCI). Asserting the importance of turning the computer into a “consumer product,” author Chris Rutkowski declares that every computer ought to have a “standard, easy-to-use format” that “approaches one of transparency. The user is able to apply intellect directly to the task; the tool itself seems to disappear” (291, 299-300). Of course, a computer that is easy to use is entirely desirable; however, at this point ease-of-use is framed in terms of the disappearance of the tool being used in the name of ‘transparency’ – which now means that users can efficiently accomplish their tasks with the help of a glossy surface that shields them from the depths of the computer, instead of the earlier notion of ‘transparency,’ which referred to a user’s ability to open up the hood of the computer and understand its inner workings directly.

Thus, no doubt in a bid finally to produce a computer that realized these ideas and appealed to consumers who are “drivers, not repairmen,” Apple unveiled the Lisa in June 1983 for nearly $10,000 (that’s $23,000 in 2012 dollars) as a cheaper and more user-friendly version of the Xerox Alto/Star, which had sold for $16,000 in 1981 (about $40,000). At least partly inspired by Larry Tesler’s 1979 Xerox PARC demo to Steve Jobs, the Lisa used a one-button mouse, overlapping windows, pop-up menus, a clipboard, and a trashcan. As Tesler was adamant to point out in a 1985 article on the “Legacy of the Lisa,” it was “the first product to let you drag [icons] with the mouse, open them by double-clicking, and watch them zoom into overlapping windows” (17). The Lisa, then, moved that much closer to realizing the dream of transparency – double-clicking, for example, encouraged users to develop a quick, physical action that bypasses the intellect through habit; more, its staggering 2048K worth of software and its mere three expansion slots firmly moved it in the direction of a readymade, closed consumer product and definitively away from the Apple II, which, when first released in 1977, came with 16K bytes of code and eight expansion slots.

Expansion slots symbolize the direction that computing would take from the moment the Lisa was released, followed by the release of the Macintosh in January 1984, to the present day. Jef Raskin, who originally began the Macintosh project in 1979, and Steve Jobs both believed that hardware expandability was one of the primary obstacles in the way of personal computing having a broader consumer appeal. In short, expansion slots made standardization impossible (partly because software writers needed consistent underlying hardware to produce widely functioning products), whereas what Raskin and Jobs both sought was an “identical, easy-to-use, low-cost appliance computer.” At this point, customization is no longer in the service of building, creating, or learning; it is, instead, for using the computer as one would any home appliance, and ideally this customization is possible only through software that the user drops into the computer via disk, just as they would drop a piece of bread into a toaster. Predictably, then, the original plan for the Macintosh had it tightly sealed, so that the user was free to use only the peripherals on the outside of the machine. While team-member Burrell Smith managed to convince Jobs to let him add slots for users to expand the machine’s RAM, Macintosh owners were still “sternly informed that only authorized dealers should attempt to open the case. Those flouting this ban were threatened with a potentially lethal electric shock”.

That Apple could successfully gloss over the aggressively closed architecture of the Macintosh while at the same time market it as a democratic computer “for the people” marks just one more remarkable reversal from this period in the history of computing. As is clear in the advertisement below that came out in Newsweek Magazine during the 1984 election cycle, the Macintosh computer was routinely touted as embodying the principle of democracy. While it was certainly more affordable than the Lisa (in that it sold for the substantially lower price of $2495), its closed architecture and lack of flexibility could still easily allow one to claim it represented a decidedly undemocratic turn in personal computing.

Thus, 1984 became the year that Apple’s philosophy of the computer-as-appliance, encased in an aesthetically pleasing exterior, flowered into an ideology. We can partly see how their ideology of the user-friendly came to fruition through their marketing campaign which included a series of magazine ads such as the one below—


Advertisement for the Apple Macintosh from the November/December 1984 issue of Newsweek Magazine.

—along with one of the most well-known TV commercials of the late twentieth century. In the case of the latter, Apple takes full advantage of the powerful resonance still carried by George Orwell’s dystopian, post-World War II novel 1984, reassuring us in the final lines of the commercial, which aired on 22 January 1984, that “On January 24th Apple Computer will introduce Macintosh. And you’ll see why 1984 won’t be like ‘1984.’”

Apple positions Macintosh, then, as a tool for and of democracy while also pitting the Apple philosophy against a (non-existent) ‘other’ (perhaps communist, perhaps IBM or ‘Big Blue’) who is attempting to oppress us with an ideology of bland sameness. Apple’s ideology, then, “saves us” from a vague and fictional, but no less threatening, Orwellian, and nightmarish ideology. As lines of robot-like people, all dressed in identical grey, shapeless clothing march into the opening scene of the commercial, a narrator of this pre-Macintosh nightmare appears on a screen before them in something that appears to be a propaganda film. We hear, spoken fervently, “Today we celebrate the first glorious anniversary of the Information Purification Directives.” And, as Apple’s hammer-thrower then enters the scene, wearing bright red shorts and pursued by soldiers, the narrator of the propaganda film continues:

We have created for the first time in all history a garden of pure ideology, where each worker may bloom, secure from the pests of any contradictory true thoughts. Our Unification of Thoughts is more powerful a weapon than any fleet or army on earth. We are one people, with one will, one resolve, one cause. Our enemies shall talk themselves to death and we will bury them with their own confusion.

And just before the hammer is thrown at the film-screen, causing a bright explosion that stuns the grey-clad viewers, the narrator finally declares, “We shall prevail!” But who exactly is the hammer-thrower-as-underdog fighting against? Who shall prevail – Apple or Big Brother? Who is warring against whom in this scenario and why? In the end, all that matters is that, at this moment, just two days before the official release of the Macintosh, Apple has created a powerful narrative of its unquestionable, even natural superiority over other models of computing that continues well into the twenty-first century. It is an ideology that of course masks itself as such and that is born out of the creation of and then opposition to a fictional, oppressive ideology we users/consumers need to be saved from. In this context, the fervor with which even Macintosh team-members believed in the rightness and goodness of their project is somewhat less surprising as they were quoted in Esquire earnestly declaring, “Very few of us were even thirty years old…We all felt as though we had missed the civil rights movement. We had missed Vietnam. What we had was the Macintosh”.

Even non-fiction accounts of the Macintosh by non-Apple employees could not help but endorse it in terms as breathless as those used by the Macintosh team-members themselves. Steven Levy’s Insanely Great, from 1994, is a document as remarkable for its wholesale endorsement of this new model of personal computing as any of the Macintosh advertisements and guide-books. Recalling his experience of seeing a demonstration of a Macintosh in 1983, he writes:

Until that moment, when one said a computer screen “lit up,” some literary license was required…But we were so accustomed to it that we hardly even thought to conceive otherwise. We simply hadn’t seen the light. I saw it that day…By the end of the demonstration, I began to understand that these were things a computer should do. There was a better way (4).

The Macintosh was not simply one of several alternatives – it represented the unquestionably right way for computing. Even at the time of writing that book, in 1993, Levy still declares that each time he turns on his Macintosh he is reminded “of the first light I saw in Cupertino, 1983. It is exhilarating, like the first glimpse of green grass when entering a baseball stadium. I have essentially accessed another world, the place where my information lives. It is a world that one enters without thinking of it…an ephemeral territory perched on the lip of math and firmament” (5). But it is precisely this legacy of the unthinking, invisible nature of the so-called “user-friendly” Macintosh environment that has foreclosed on using computers for creativity and learning, and that continues in contemporary multi-touch, gestural, and ubiquitous computing devices such as the iPad and the iPhone, whose interfaces are touted as utterly invisible (and whose inner workings are thus de facto inaccessible).


“‘1984’ Apple Macintosh Commercial.” Youtube. 27 Aug. 2008. Web. 21 June 2012.

Apple Computer Inc. Apple Human Interface Guidelines: The Apple Desktop Interface. Reading, MA: Addison-Wesley, 1987.

Bardini, Thierry. Bootstrapping: Douglas Engelbart, Coevolution, and the Origins of Personal Computing. Stanford, CA: Stanford UP, 2000.

Chen, Jung-Wei and Jiajie Zhang. “Comparing Text-based and Graphic User Interfaces for Novice and Expert Users.” AMIA Annual Symposium Proceedings Archive. 2007. Web. 14 February 2012.

Chun, Wendy. Programmed Visions: Software and Memory. Cambridge, MA: MIT Press, 2011.

Engelbart, Douglas. “Augmenting Human Intellect: A Conceptual Framework.” In The New Media Reader. Eds. Noah Wardrip-Fruin and Nick Montfort. Cambridge, MA: MIT Press, 2003. 95-108.

—. “Workstation History and the Augmented Knowledge Workshop.” Doug Engelbart Institute. 2008. Web. 3 April 2011.

—, and William English. “A Research Center for Augmenting Human Intellect.” In The New Media Reader. Eds. Noah Wardrip-Fruin and Nick Montfort. Cambridge, MA: MIT Press, 2003. 233-246.

Erickson, Thomas D. “Interface and the Evolution of Pidgins: Creative Design for the Analytically Inclined.” In The Art of Human-Computer Interface Design. Ed. Brenda Laurel. Reading, MA: Addison-Wesley Publishing Company, Inc., 1990. 11-16.

Gassée, Jean-Louis. The Third Apple: Personal Computers & the Cultural Revolution. San Diego, New York, London: Harcourt Brace Jovanovich Publishers, 1985.

Goldberg, Adele. “Introducing the Smalltalk-80 System.” Byte 6:8 (August 1981): 14-26.

Hertzfeld, Andy and Steve Capps et al. Revolution in the Valley. Sebastopol, CA: O’Reilly, 2005.

Ingalls, Daniel. “Design Principles Behind Smalltalk.” Byte 6:8 (August 1981): 286-298.

Johnson, Jeff and Theresa Roberts et al. “The Xerox Star: A Retrospective.” Computer 22:9 (September 1989): 11-29.

Johnson, Steven. Interface Culture: How New Technology Transforms the Way We Create and Communicate. New York: Basic Books, 1997.

Kay, Alan. “The Early History of Smalltalk.” Smalltalk dot org. Web. 5 April 2012.

—. “User Interface: A Personal View.” In The Art of Human-Computer Interface Design. Ed. Brenda Laurel. Reading, MA: Addison-Wesley Publishing Company, Inc., 1990. 191-207.

—, and Adele Goldberg. “Personal Dynamic Media.” In The New Media Reader. Eds. Noah Wardrip-Fruin and Nick Montfort. Cambridge, MA: MIT Press, 2003. 393-409.

Levy, Steven. Hackers: Heroes of the Computer Revolution. 25th Anniversary Edition. New York: O’Reilly Media, 2010.

—. Insanely Great: The Life and Times of Macintosh, the Computer that Changed Everything. New York: Viking, 1994.

Lewis, T.G. “Some Laws of Personal Computing.” Byte 4:10 (October 1979): 186-191.

Linden, Ted, Eric Harslem, Xerox Corporation. Office Systems Technology: A Look Into the World of the Xerox 8000 Series Products: Workstations, Services, Ethernet, and Software Development. Palo Alto, CA: Office Systems Division, 1982.

“LOGO.” Advertisement. Byte 7:2 (February 1982): 255.

Morgan, Chris and Gregg Williams, Phil Lemmons. “An Interview with Wayne Rosing, Bruce Daniels, and Larry Tesler: A Behind-the-scenes Look at the Development of Apple’s Lisa.” Reprinted from Byte magazine 8:2 (February 1983): 90-114. Web. 14 April 2012.

Nelson, Theodor. “Computer Lib / Dream Machines.” In The New Media Reader. Eds. Noah Wardrip-Fruin and Nick Montfort. Cambridge, MA: MIT Press, 2003. 303-338.

Papert, Seymour. Mindstorms: Children, Computers, and Powerful Ideas. New York: Basic Books, 1980.

Reenskaug, Trygve. “User-Oriented Descriptions of Smalltalk Systems.” Byte 6:8 (August 1981): 148-166.

Reimer, Jeremy. “Total share: 30 years of personal computer market share figures.” Ars Technica. 2006. Web. 4 December 2011.

Rutkowski, Chris. “An Introduction to the Human Applications Standard Computer Interface: Part 1: Theory and Principles.” Byte 7:10 (October 1982): 291-310.

—. “An Introduction to the Human Applications Standard Computer Interface: Part 2: Implementing the HASCI Concept.” Byte 7:11 (November 1982): 379-390.

Smith, David Canfield and Charles Irby et al. “Designing the Star User Interface.” Byte 7:4 (April 1982): 242-282.

Tesler, Larry. “The Legacy of the Lisa.” Macworld magazine (September 1985): 17-22.

Wardrip-Fruin, Noah. “Introduction” to “A Research Center for Augmenting Human Intellect,” by Douglas Engelbart. In The New Media Reader. Eds. Noah Wardrip-Fruin and Nick Montfort. Cambridge, MA: MIT Press, 2003. 231-232.

“What is Logo?” The Logo Foundation. 2011. Web. 5 April 2012.

Whiteside, John and Sandra Jones, Paul S. Levy, Dennis Wixon. “User Performance with Command, Menu, and Iconic Interfaces.” CHI 1985 Proceedings. April 1985. 185-191.

Wilber, Mike and David Fylstra. “Homebrewery vs the Software Priesthood.” Byte 14 (October 1976): 90-94.

Williams, Gregg. “The Lisa Computer System: Apple Designs a New Kind of Machine.” Product Description. Byte 8:2 (February 1983): 33-50.

Wozniak, Steve. “The Apple-II.” System Description. Byte 2:5 (May 1977): 34-43.

D.I.Y. Typewriter Art


Download the pdf here.

This lovely oddity arrived in the mail yesterday – Bob Neill’s Book of Typewriter Art (with special computer program) from 1982. It’s difficult to capture its lovely oddness in just a few sentences or images, so I decided to scan the entirety of the book and make it available here (pdf). Inside you’ll find line-by-line instructions for creating charming portraits of everything from the British royal family to Siamese cats and even Kojak.


I’ve long been interested in the way writers in the 1960s and 1970s were – once the typewriter had thoroughly become commonplace – finding ways to play with the limits and possibilities of this machine as a writing medium. I’ve also thought that we can look back on typestracts such as Steve McCaffery’s Carnival and see them as informed by a D.I.Y. and hacking sensibility. While this book of typewriter art is clearly invested in representationality and is not particularly experimental, its content is entirely a D.I.Y. guide to creating typewriter art, very much like early-1980s computer magazines such as Byte that would include BASIC programs. Here, instead of computer code, we’re given typewritten letters as code. And in fact, the book includes an appendix with a Microsoft BASIC program for creating a “Prince Charles Portrait”, programmed for the Commodore PET. And since the second appendix is a chart showing “sizes of paper required for each picture on different kinds of typewriter,” I can’t help thinking this book is a unique artifact in that it is entirely framed by the appearance of the personal computer – a book on a soon-to-be-outdated technology shadowed by its impending replacement.
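The comparison to magazine BASIC listings runs deep: Neill’s line-by-line instructions are effectively a program, each row of a portrait being a sequence of “type this character so many times” commands. A minimal sketch of the idea in a modern language (the instruction data below is invented for illustration and is not taken from Neill’s book):

```python
# Decode typewriter-art instructions of the form (count, character)
# into printable lines -- essentially run-length decoding.
def decode_row(instructions):
    """Turn [(count, char), ...] into one line of typewriter art."""
    return "".join(char * count for count, char in instructions)

# A made-up three-row "picture"; Neill's actual instructions are far
# longer and tuned to the character widths of specific typewriters.
picture = [
    [(5, " "), (3, "X")],
    [(4, " "), (5, "X")],
    [(3, " "), (7, "X")],
]

for row in picture:
    print(decode_row(row))
```

Presumably the book’s Commodore PET program did something along these lines with PRINT statements, though that is only a guess; the point is simply that a typewriter portrait and a program listing are the same kind of object – instructions to be executed line by line.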


“The whole world is faking it”: Computer-Generated Poetry as Linguistic Evidence

The following is a short review I wrote of discourse.cpp (pdf available here) by O.S. le Si, ed. Aurélie Herbelot, published by the Berlin-based Peer Press in 2011. The review was just published in the December issue of Computational Linguistics.


discourse.cpp (Peer Press, 2011) is a short collection of computer-generated poetry edited by computational linguistics scholar Aurélie Herbelot and produced by O.S. le Si, a computer mainly used for natural language processing and named after a program which tries to identify the meanings of words based on their context. In this case, Herbelot fed 200,000 pages from Wikipedia into the program, which then parsed them and output lists of items whose contexts are similar to those of words such as “gender,” “love,” “family,” and “illness.” For example, Herbelot explains that content in the opening piece, titled “the creation,” was “selected out of a list of 10,000 entries. Each entry was produced by automatically looking for taxonomic relationships in Wikipedia”; and, for the piece titled “gender,” she chose the “twenty-five best contexts for man and woman in original order. No further changes” (47). The collection is, then, as we are told on the back cover, “about things that people say about things. It was written by a computer.”
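To make the method a little more concrete: the “best contexts” for a word, in distributional terms, are simply the words that most often co-occur with it across a corpus. Here is a toy sketch of that extraction; the corpus, window size, and function name are invented for illustration and bear no relation to Herbelot’s actual code:

```python
from collections import Counter

def best_contexts(tokens, target, window=2, n=3):
    """Return the n words that most frequently occur within
    `window` positions of `target` -- a toy stand-in for
    extracting a word's distributional contexts."""
    counts = Counter()
    for i, tok in enumerate(tokens):
        if tok == target:
            lo, hi = max(0, i - window), i + window + 1
            counts.update(t for t in tokens[lo:hi] if t != target)
    return [word for word, _ in counts.most_common(n)]

# A tiny invented corpus standing in for 200,000 Wikipedia pages.
corpus = ("the man won the title the man loved the woman "
          "the woman gave birth").split()
print(best_contexts(corpus, "man"))
```

Scaled up to Wikipedia, with function words filtered out, lists like these are exactly the kind of raw material the collection prints as poems.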

Poets – or, for the sake of those still attached to the notion of an author who intentionally delivers well-crafted, expressive writing, “so-called poets” – have been producing writing with the aid of digital computer algorithms since Max Bense and Theo Lutz first experimented with computer-generated writing in 1959. The best-known English-language example is the 1984 collection of poems The Policeman’s Beard is Half-Constructed by the Artificial Intelligence program Racter (a collection which was, it was later discovered, heavily edited by Racter’s creators William Chamberlain and Thomas Etter). discourse.cpp is yet another experiment in testing the capabilities of the computer and computer-programmer to create not so much “good” poetry as revealing poetry – poetry that is not meant to be close-read (most often to discover underlying authorial intent) but rather read as a collection of a kind of linguistic evidence. In this case, the collection provides evidence of the computer program’s probing of trends in online human language usage, which in turn, not surprisingly, provides evidence of certain prevailing cultural norms; for example, we can see quite clearly our culture’s continued attachment to heteronormative gender roles in “gender”:

Woman                        Man
man love —                   — win title
— marry man                  — love woman
— give birth                 — claim be (18)

Moreover, this linguistic evidence draws attention to the ever-increasing intertwinement of human and digital computer and the resulting displacement of the human as sole reader-writer, now that the computer is also a reader-writer alongside (and often in collaboration with) the human.

As Herbelot rightly points out in the “Editor’s Foreword,” this experimentation with the computer as reader-writer also comes, to a large extent, out of early twentieth-century avant-garde writing that similarly sought to undermine, if not displace, the individual intending author. The Dadaist Tristan Tzara, for instance, infamously advocated in “TO MAKE A DADAIST POEM” (1920) writing poetry by cutting out the words of a newspaper article, drawing these words randomly from a hat, and then arranging the chance-chosen words into a poem by “an infinitely original author of charming sensibility.” Tzara was, of course, being typically Dadaist in his tongue-in-cheek attitude; but he was also, I believe, serious in his belief that the combination of appropriation and chance-generated methods of producing text could produce original writing that simultaneously undermined the egotism of the author. However, insofar as discourse.cpp belongs to this lineage of experimental writing invested in chance-generated methods and, later, in exploiting computer technology as the latest means of producing such writing, it also inherits a certain disingenuousness that accompanies this lineage. No matter how much Tzara and later authors of computer-generated writing sought to remove the human-as-author, there was and still is no getting around the fact that humans are deeply involved in the creation process – whether as cutters-and-pasters, computer programmers, inputters, or editors. The collection, then, is a much more complex amalgam than even Herbelot seems willing to acknowledge, as discourse.cpp is evidence of the evenly distributed reading and writing that took place between Herbelot and the computer/program itself.
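Tzara’s recipe is, after all, already an algorithm, which is partly why it translates so readily to the computer; rendered as a sketch (the sample “article” and the fixed random seed are my additions, not Tzara’s):

```python
import random

def dadaist_poem(article, seed=None):
    """Tzara's recipe: cut the article into words, put them in a
    hat (shuffle), and draw them out one after the other."""
    words = article.split()        # cut out each of the words
    rng = random.Random(seed)      # the hat
    rng.shuffle(words)             # shake gently
    return " ".join(words)         # copy conscientiously

article = ("the bourgeois newspaper reports the charming "
           "sensibility of the crowd")
print(dadaist_poem(article, seed=0))
```

The seed makes the “chance” operation repeatable, which is itself a small reminder of the review’s point: a human hand (here, a programmer’s) is always somewhere in the loop.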

Media Studies and Writing Surfaces (introduction to Selected Fiction of John Riddell)

Below is the introduction that Derek Beaulieu and I wrote for Writing Surfaces: Selected Fiction of John Riddell, which Wilfrid Laurier University Press is generously publishing in April 2013. Please do pre-order a copy through your local independent bookstore. The collection is, I think, a perfect instance of literary experimentation with media archaeology.


Introduction: Media Studies and Writing Surfaces
Writing Surfaces: The Fiction of John Riddell brings an overview of the work of John Riddell to a 21st-century audience, an audience who will see this volume as a radical, literary manifestation of media archaeology. This book is also, in the words of the promotional material of Riddell’s 1977 Criss-cross: a Text Book of Modern Composition, a “long-over-due debut by one of our most striking new fictioneers.”

Since 1963 John Riddell’s work has appeared in such foundational literary journals as grOnk, Rampike, Open Letter and Descant as part of an on-going dialogue with Canadian literary radicality. Riddell was an early contributing editor to bpNichol’s Ganglia, a micro-press dedicated to the development of community-level publishing and the distribution of experimental poetries. This relationship continued to evolve with his co-founding of Phenomenon Press and Kontakte magazine with Richard Truhlar (1976) and his involvement with Underwhich Editions (founded in 1978): a “fusion of high production standards and top-quality literary innovation” which focused on “presenting, in diverse and appealing physical formats, new works by contemporary creators, focusing on formal invention and encompassing the expanded frontiers of literary endeavour.”

Writing Surfaces: The Fiction of John Riddell reflects Riddell’s participation in these Toronto-based, Marshall McLuhan-influenced, experimental poetry communities from the 1960s until roughly the mid- to late-1980s. These communities, and the work of contemporaries bpNichol, Paul Dutton, jwcurry, Richard Truhlar and Steve McCaffery, give context to Riddell’s literary practice and his focus on ’pataphysics, philosophically-investigative prose and process-driven visual fiction. While many of his colleagues were more renowned for their poetic and sound-based investigations, Riddell clearly shared both Nichol’s fondness for the doubleness of the visual-verbal pun and Steve McCaffery’s technical virtuosity and philosophical sophistication. In his magazine publications, small press ephemera, and trade publications, Riddell created a conversation between these two sets of poetics and extended it to the realm of fiction (exploring a truly hybrid form that is poetry as much as it is fiction). Riddell’s fiction works to explore the development and accretion of narrative in time-based sequence, a fiction of visuality and media. Writing Surfaces is the documentation of Riddell pushing his own writing to the very limit of what conceivably counts as writing through writing.

While it’s true that the title “writing surfaces” carries with it the doubling and reversibility of noun and verb, reminding us how the page is as much a flat canvas for visual expression as it is a container for thought, the first title we proposed for this collection was “Media Studies.” The latter, while admittedly too academic-sounding to describe writing as visually and conceptually alive as Riddell’s, could still describe Riddell’s entire oeuvre; the term not only refers to the study of everyday media (such as television, radio, the digital computer and so on) but it can—in fact should—encompass the study of textual media and the ways in which writing engages with how it is shaped and defined by mediating technologies. In other words, Riddell’s work is a kind of textbook for the study of media through writing, or, the writing of writing.

The best-known example of Riddell’s writing of writing is “Pope Leo, El ELoPE: A Tragedy in Four Letters,” initially published in April 1969 with mimeograph illustrations by bpNichol through Nichol’s small but influential Canadian magazine grOnk. It was published again, with more refined hand-drawn illustrations (once again by Nichol), in the Governor General’s Award-winning anthology Cosmic Chef: An Evening of Concrete (1970, the version included here) and in a further iteration in Criss-Cross: A Text Book of Modern Composition with illustrations by Filipino-Canadian comic book artist Franc Reyes (who would later pencil and ink Tarzan, House of Mystery and Weird War for DC Comics and was involved with 1970s underground Canadian comix publisher Andromeda). “Pope Leo” relates a stripped-down comic-strip tale of the tragic murder of Pope Leo; the narrative unfolds partly by way of frames within frames, windows within windows, telling a minimalist story in which the comic-strip frame is nothing but a simple hand-drawn square with the remarkable power to bring a story into being. The anagrammatic text is an exploration of the language possibilities inherent in the letters ‘p,’ ‘o,’ ‘l,’ and ‘e’ (hence the sub-title, “a tragedy in four letters”)—sometimes using one of the letters twice, sometimes dropping one, always rearranging, always moving back and forth along the spectrum of sense/nonsense: “O POPE LEO! PEOPLE POLL PEOPLE! PEOPLE POLE PEOPLE! LO PEOPLE.”

With a/z does it (1988), Riddell’s writing of writing focuses even more on investigating the possibilities of story that lie well beyond the sentence, the paragraph, and the narrative arc. Rather than playing with the visual story structure of the frame and the verbal structure of the anagram as means by which to create a narrative, with pieces like “placid/special” Riddell first creates grid-like structures of text with a mono-spaced typewriter font and then uses a photocopier to document the movement of the text in waves across the glass bed. The resultant text is the visual equivalent of his earlier fine-tuned probing of the line between sense and nonsense in “Pope Leo.” These typewriter/photocopier pieces record both signal and noise as columns of text waver in and out of legibility. Semantically, these mirage-like texts focus on the words ‘placid’ (the lines of text reminding us of the symmetrical reversibility of the ‘p’ and ‘d’ which begin and end the word), ‘love’ (with just the slightest suggestion of ‘velo’ at the beginning and end of each wave), ‘first,’ ‘i met,’ ‘special,’ ‘evening’ and ‘light’ (appearing as a hazy sunset moving down the page), and conclude with ‘relax’ and ‘enjoy.’ The paratactical juxtaposition of the two pages in “placid/special” creates the barest suggestion of a narrative about lovers enjoying an evening together, while at the same time each page is in itself an even more minimalist story told through experiments with the manipulation of writing media.

Riddell’s writing of writing that is simultaneously sense and nonsense, verbal and visual, self-contained and serial—that demands to be read at the same time as it ought to be viewed—nearly reaches its zenith in later work such as E clips E (1989). In particular, “surveys” is writing only in the most technical sense with its Jackson Pollock-like paint drippings and scattered individual letters, all counter-balanced by neat, hand-drawn frames.

Just as Riddell’s compositions challenge how writers and readers form meaning, the original publications of many of the selections in Writing Surfaces, and Riddell’s larger oeuvre, were also physically constructed in a way that would demand reader participation. Riddell’s original publications include small press leaflets (Pope Leo, El ELoPE: A Tragedy in Four Letters), business card-sized broadsides (“spring”), chapbooks (A Hole in the Head and Traces) and pamphlets (How to Grow Your Own Light Bulbs). His work also extends into books as non-books: posters which double as dart boards (1987’s d’Art Board), novels arranged as packages of cigarettes (1996’s Smokes: a novel mystery) and decks of cards to be shuffled, played and processually read (1981’s War (Words at Roar), Vol.1: s/word/s games and others). Inside books with otherwise traditional appearances Riddell insists that his readers reject passive reception of writing in favour of a more active role. While outside of the purview of Writing Surfaces, 1996’s How to Grow Your Own Light Bulbs includes texts that must be excised and re-assembled (“Peace Puzzle”); burnt with a match (“Burnout!”); and written by the reader (“Nightmare Hotel”). Copies of the second edition of Riddell’s chapbook TRACES (1991) include a piece of mirrored foil to read the otherwise illegible text.

Riddell’s compositions do not just question the traditional role of the author; they attempt to annihilate it. With “a shredded text” (1989) Riddell fed an original poem into a shredder, which then read the text and excreted (as writing) the waste material of that consumption. The act of machinistic consumption creates a new poem—the original was simply the material for the creation and documentation of the final piece. With “a shredded text” Riddell acts as editor to restrict the amount of waste that enters the manuscript of the book. The machine-author becomes a reader and writer of excess and non-meaning-based texts while the human-author becomes the voice of restraint and reason attempting to limit the presentation of continuous waste-production as writing. If, as Barthes argues, “to read […] is a labour of language. To read is to find meanings,” then the consumption and expulsion of texts by machines such as photocopiers and shredders produces meanings where meanings are not expected by fracturing the text at the level of creation and consumption—an act which is simultaneously both readerly and writerly.

Riddell’s oeuvre is almost entirely out of print and unavailable except on the rare book market. Working within the purview of 1970s and 1980s Canadian small presses means that Riddell’s writing proves elusive to a generation of readers who have come of literary age after the demise of once-vital publishers such as Aya Press (which was renamed The Mercury Press in 1990 and has also ceased publishing), Underwhich Editions, Ganglia, grOnk and the original Coach House Press. As obscure as his original books may be, Riddell’s work remains a captivating example of hypothetical prose; dreamt narratives that have sprouted from our abandoned machines. With no words and no semantic content, we are left to read only the process of writing made product—a textbook of compositional method using writing media from the pen/pencil, the sheet of paper, the typewriter, the shredder and photocopier, to even the paintbrush. The medium is the message.

“Reading Writing Interfaces” Book Project Description

Reading Writing Interfaces: From the Digital to the Bookbound
(forthcoming University of Minnesota Press, 2014)

Table of Contents:

Chapter 1: Indistinguishable From Magic | Invisible Interfaces and Digital Literature as Demystifier

1.0 Introduction | Invisible, Imperceptible, Inoperable
1.1 Natural, Organic, Invisible
1.2 The iPad | “a truly magical and revolutionary product”
1.3 From Videoplace to iOS | A Brief History of Creativity through Multitouch
1.4 iPoems
1.5 Making the Invisible Visible | Hacking, Glitch, Defamiliarization in Digital Literature

Chapter 2: From the Philosophy of the Open to the Ideology of the User-Friendly

2.0 Introduction | Digging to Denaturalize
2.1 Open, Extensible, Flexible | NLS, Logo, Smalltalk
2.2 Writing as Tinkering | The Apple II and bpNichol, Geof Huth, Paul Zelevansky
2.3 Closed, Transparent, Task-oriented | The Apple Macintosh

Chapter 3: Typewriter Concrete Poetry and Activist Media Poetics

3.0 Introduction | Analog Hacktivism
3.1 The Poetics of a McLuhanesque Media Archaeology
3.2 Literary D.I.Y. and Concrete Poetry
3.3 From Clean to Dirty Concrete
3.4 bpNichol, Dom Sylvester Houédard, Steve McCaffery

Chapter 4: The Fascicle as Process and Product

4.0 Introduction | Against a Receding Present
4.1 My Digital Dickinson
4.2 The Digital/Dickinson Poem as Antidote to the Interface-Free
4.3 The Digital/Dickinson Poem as Thinkertoy

Chapter 5: Postscript | The Googlization of Literature

5.0 Introduction | Readingwriting
5.1 Computer-generated Writing and the Neutrality of the Machine
5.2 “And so they came to inhabit the realm of the very unimaginary”

Works Cited

Just as the increasing ubiquity and significance of digital media have provoked us to revisit the book as a technology, they have introduced concepts that, retroactively, we can productively apply to older media. Interface, a digital-born concept, is such an example. Reading Writing Interfaces: From the Bookbound to the Digital probes how interfaces have acted as a defining threshold between reader/writer and writing itself across several key techno-literary contexts. As I outline in the chapter summaries below, my book describes, largely through original archival research, ruptures in present and past media environments that expose how certain literary engagements with screen- and print-based technologies transform reading/writing practices. To borrow from Jussi Parikka’s What Is Media Archaeology? (2012), my book “thinks” media archaeologically as its analyses undulate from present to past media environments. More specifically, I lay bare the way in which poets in particular – from the contemporary Jason Nelson and Judd Morrissey back to Emily Dickinson – work with and against interfaces across various media to undermine the assumed transparency of conventional reading and writing practices. My book, then, is a crucial contribution to the fields of media studies/digital humanities and poetry/poetics in its development of a media poetics which frames literary production as ineluctably involved in a critical engagement with the limits and possibilities of writing media.

My book works back through media history, probing poetry’s response to crucial moments in the development of digital and analog interfaces. That is, the book chapters move from the present moment to the past, each also using a particular historical moment to understand the present: Reading Writing Interfaces begins with digital poetry’s challenge to the alleged invisibility of multitouch in the early 21st century, moves to poets’ engagement with the transition from the late 1960s’ emphasis on openness and creativity in computing to the 1980s’ ideology of the user-friendly Graphical User Interface, to poetic experiments with the strictures of the typewriter in the 1960s and 1970s, and finally to Emily Dickinson’s use of the fascicle as a way to challenge the coherence of the book in the mid to late 19th century. Thus, throughout, I demonstrate how a certain thread of experimental poetry has always been engaged with questioning the media by which it is made and through which it is consumed. At each point in this non-linear history, I describe how this lineage of poetry undermines the prevailing philosophies of a particular media ecology and so reveals to us, in our present moment, the creative limits and possibilities built into our contemporary technologies. By the time I return once again to the present moment in the post-script via the foregoing four techno-literary ruptures, I have made visible a longstanding conflict between those who would deny us access to fundamental tools of creative production and those who work to undermine these foreclosures on creativity. In many ways, then, my book reveals the strong political engagement driving a tradition of experimental poetry and argues for poetry’s importance in the digital age.

The underlying methodology of Reading Writing Interfaces is the burgeoning field of media archaeology. Media archaeology does not seek to reveal the present as an inevitable consequence of the past but instead looks to describe it as one possibility generated out of a heterogeneous past. Also at the heart of media archaeology is an on-going struggle to keep alive what Siegfried Zielinski calls “variantology” – the discovery of “individual variations” in the use or abuse of media, especially those variations that defy the ever-increasing trend toward “standardization and uniformity among the competing electronic and digital technologies.” Following Zielinski, I uncover a non-linear and non-teleological series of media phenomena – or ruptures – as a way to avoid reinstating a model of media history that tends toward narratives of progress and generally ignores neglected, failed, or dead media. That said, following on the debates in the field of digital humanities about the connection of theory and praxis (the so-called “more hack, less yack” debate) my book is more about doing than theorizing media archaeology; it considers these ruptures at the intersection of key writing technologies and responses by poets whose practice is at the limit of these technologies. Crucially, no books on or identified with media archaeology have engaged thoroughly with the literary and none have consistently engaged with poetry in particular; thus my book is also an innovation in the field in that it uses this methodology to read poetry by way of interface.

Chapter Summaries:
One of the most recent and well-known unveilings of an “interface-free interface” came in 2006, when research scientist Jeff Han introduced a 36-inch-wide computing screen which allows the user to perform almost any computer-driven operation through multi-touch sensing. Han describes this interface as “completely intuitive . . . there’s no instruction manual, the interface just sort of disappears.” However, the interface does not disappear but rather, through a sleight-of-hand, deceives the user into believing there is no interface at all. I use this anecdote to open the introduction to Reading Writing Interfaces, first, as a way to illustrate the current trend in interface design, which emphasizes usability at the expense of access to the underlying workings of interfaces, workings that in turn define the limits and possibilities of creative expression. And second, I use the anecdote to begin a theoretical and historical overview of the notion of interface, particularly as it has played out in the computing industry over the last forty years. The definition of ‘interface’ I settle on throughout my book is one I adopt from Alexander Galloway: a technology, whether book- or screen-based, that acts as a threshold between reader and writing and that subtly delimits both the reading and the writing process. This nuanced and yet expansive definition makes way for an acknowledgement of the decisive back-and-forth play that occurs between human and machine, and it broadens our conventional notions of interface to include a range of writing interfaces such as the command line, the typewriter, or even the fascicle. In light of Reading Writing Interfaces’ dual attention to media studies and poetry/poetics, I close the introduction with discussions of these two fields as they influence this project.
I situate the book within media archaeology, which I take as my methodology, and explain how its emphasis on a non-teleological unearthing of uses/abuses of media allows me to proceed through my media history in reverse chronological order as I uncover media ruptures from the present through to the past. Finally, I conclude the introduction by pairing media archaeology with the notion of ‘media poetics’ as a way to account for poets’ activist engagement with the creative limits and possibilities of media.

The first chapter, titled “Indistinguishable From Magic: Invisible Interfaces and their Demystification,” thus begins with the present moment. Here I argue that contemporary writers such as Young-Hae Chang, Judd Morrissey, Jason Nelson, and Jörg Piringer advance a 21st century media poetics by producing digital poems that are deliberately difficult to navigate or whose interfaces are anything but user-friendly. For example, Morrissey and Nelson create interfaces that frustrate us because they seek to defamiliarize the interfaces we no longer notice; it is a literary strategy akin to Viktor Shklovsky’s early twentieth-century invocation of ‘defamiliarization’ to describe the purpose of poetic language – except here it is deployed to force us to re-see the interfaces of the present. I argue it is precisely against a troubling move toward invisibility in digital computing interfaces that Judd Morrissey has created texts such as “The Jew’s Daughter” – a work in which readers are invited to click on hyperlinks embedded in the narrative text, links which do not lead anywhere so much as they unpredictably change some portion of the text before our eyes. The result of our attempts to navigate such a frustrating interface, structured as it is by hyperlinks we believe ought to lead us somewhere, is that the interface of the Web comes into view once again. Likewise working against the clean, supposedly transparent interface of the Web, in “game, game, game and again game” Jason Nelson creates a game-poem in which he self-consciously embraces a hand-drawn, hand-written aesthetic while deliberately undoing poetic and videogame conventions through a nonsensical point-system and mechanisms that ensure the player neither accumulates points nor “wins.” At the heart, then, of the most provocative digital poems lies a thoroughgoing engagement with difficulty or even failure.
By hacking, breaking, or simply making access to interfaces trying, these writers work against the ways in which these interfaces are becoming increasingly invisible even while these same interfaces also increasingly define what and how we read/write. In this chapter I also pay particular attention to how writers such as Jörg Piringer are creating poetry “apps” which work against the grain of the multitouch interface that has been popularized by Apple’s iPad – a device that perfectly exemplifies the ways in which the interface-free interface places restrictions on creative expression in the name of an ideology, more than a philosophy, of the user-friendly.

The second chapter, “From the Philosophy of the Open to the Ideology of the User-Friendly,” uncovers the shift from the late 1960s to the early 1980s that made way for those very interfaces, touted as utterly invisible, that I discuss in chapter one. Based on original archival research into historically important computing magazines such as Byte, Computer, and Macworld, as well as handbooks published by Apple Inc. and Xerox, I bring to light the philosophies driving debates in the tech industry about interface and the consequences of the move from the command-line interface of the early 1980s to the first mainstream windows-based interface introduced by Apple in the mid-1980s. I argue that the move from a philosophy of computing based on a belief in the importance of open and extensible hardware to the broad adoption of the supposedly user-friendly Graphical User Interface – the use of a keyboard/screen/mouse in conjunction with windows – fundamentally changed the computing landscape and inaugurated an era in which users have little or no comprehension of the digital computer as a medium. Thus, media poetics prior to the release of the Apple Macintosh in 1984 mostly takes the form of experimentation with computers such as the Apple IIe that were, at the time, new to writers. Digital poetry from the early 1980s by bpNichol, Geof Huth, and Paul Zelevansky does not work to make the command-line or Apple IIe interface visible so much as it openly plays with and tentatively tests the parameters of the personal computer as a still-new writing technology. This kind of open experimentation almost entirely disappeared for a number of years as the Apple Macintosh’s design innovations and marketing made open computer architecture and the command-line interface obsolete and GUIs pervasive.

In the third chapter, “Typewriter Concrete Poetry and Activist Media Poetics,” I delve into the era from the early 1960s to the mid-1970s in which poets, working heavily under the influence of Marshall McLuhan and before the widespread adoption of the personal computer, sought to create concrete poetry as a way to experiment with the limits and possibilities of the typewriter. These poems – particularly those by the Canadian writers bpNichol and Steve McCaffery and the English Benedictine monk Dom Sylvester Houédard – often deliberately court the media noise of the typewriter as a way to draw attention to the typewriter-as-interface. As such, when Andrew Lloyd writes in the 1972 collection Typewriter Poems that “a typewriter is a poem. A poem is not a typewriter,” he gestures to the ways in which poets enact a media-analysis of the typewriter via writing as they cleverly undo stereotypical assumptions about the typewriter itself: a poem written on a typewriter is not merely a series of words delivered via a mechanical writing device and, for that matter, neither is the typewriter merely a mechanical writing device. Instead, these poems express and enact a poetics of the remarkably varied material specificities of the typewriter as a particular kind of mechanical writing interface that necessarily inflects both how and what one writes. Further, since they are about their making as much as they are about their reading/viewing, if we read these concrete poems in relation to Marshall McLuhan’s unique pairing of literary studies with media studies – a pairing which is also his unique contribution to media archaeology avant la lettre – we can again reimagine formally experimental poetry and poetics as engaged with media studies and even with hacking reading/writing interfaces. 
Further, this chapter also draws on archival research to uncover not only the influence of McLuhan on concrete poetry but – for the first time – to delineate concrete poetry’s influence on those writings by McLuhan that are now foundational to media studies.

In the fourth chapter, “The Fascicle as Process and Product,” I read digital poems into and out of Emily Dickinson’s use of the fascicle; I assert the fascicle is a writing interface, at once process and product, from a past that becomes ever more distant the more enmeshed in the digital we become and the more the book becomes a fetishized object. Otherwise put, her fascicles, as much as the late twentieth-century digital computers and the mid-twentieth-century typewriters I discuss in chapters two and three, are now slowly but surely revealing themselves as a kind of interface that defines the nature of reading as much as writing. Moreover, extending certain tenets of media archaeology I touch on above, I read the digital into and out of Dickinson’s fascicles as a way to enrich our understanding of her work. Such a reading self-consciously exploits the terminology and theoretical framing of a present moment so steeped in the digital – given the ubiquity of terms that describe digital culture, such as ‘interface,’ ‘network,’ and ‘link,’ or even of now commonly understood terms such as ‘bookmark’ and ‘archive,’ previously used only by the bookish or the literary scholar – that it saturates our language and habits of thought, often without our knowing.

Finally, in chapter five, the postscript to Reading Writing Interfaces, “The Googlization of Literature,” I focus on the interface of the search engine, particularly Google’s, to describe one of conceptual writing’s unique contributions to contemporary poetry/poetics and media studies. Building on the 20th century’s computer-generated texts, conceptual writing gives us a poetics perfectly appropriate for our current cultural moment in that it implicitly acknowledges we are living not just in an era of the search engine algorithm but in an era of what Siva Vaidhyanathan calls “The Googlization of Everything.” When we search for data on the Web we are no longer “searching” – we are “Googling.” But conceptual writers such as Bill Kennedy, Darren Wershler, and Tan Lin who experiment with/on Google are not simply pointing to its ubiquity – they are also implicitly questioning how it works, how it generates the results it does, and so how it sells us back to ourselves. Such writing acknowledges a materiality of language in the digital that goes deeper than the recognition of the material size, shape, sound, and texture of letters and words that characterizes much of twentieth-century bookbound, experimental poetry practice. These writers take us beyond the 20th century avant-garde’s interest in the verbal/vocal/visual aspects of materiality to urge us instead to attend to the materiality of 21st century digital language production. They ask: what happens when we appropriate the role of Google for our own purposes rather than Google’s? What happens when we wrest Google from itself and use it not only to find out things about ourselves as a culture but to find out what Google is finding out about us? “The Googlization of Literature,” then, concludes Reading Writing Interfaces by providing an even more wide-ranging sense of poetry’s response to the interface-free.

Laughing Gland (pdfs)

After studying and writing poetry under the guidance of Douglas Barbour at the University of Alberta, in the late 1990s I made my way to the University of Victoria to do an M.A. with Barbour’s long-time sound poetry collaborator Stephen Scobie. I went there to write a thesis on bpNichol and sound poetry and, in my last year there, I decided to publish a small, free literary journal which included sound poetry scores, visual poetry, drama, and criticism.

I’ve only recently realized that there are some real gems in the two issues of Laughing Gland that I published and so I’ve decided to make a pdf of these two issues available here on my blog – download issue 1 here and download issue 2 here.

Issue 1, published in Fall 1999, includes a sound poetry score by Scobie and Barbour called “THE SUKNASKI VARIATIONS: a performance piece for Re: Sounding”:

In all fairness to any readers out there, I should also say that this issue includes some graduate student essays—

Issue two, published in Spring 2000, not only includes a handful of early concrete poems by the very accomplished visual writer Derek Beaulieu and the poet/publisher rob mclennan, but it also includes a previously unpublished sound poetry score by bpNichol titled “BUIK: Glasgow Dialectics” from October 1982:

And hopefully you’ll get a good chuckle out of the other graduate student essays in this issue as well – enjoy!