What and Where is the Interface in Virtual Reality? An Interview with Illya Szilak on Queerskins

It’s been four years since the publication of Reading Writing Interfaces (University of Minnesota Press 2014) and admittedly, to my ears and eyes, the first chapter on gestural and multitouch interfaces already seems outdated – at least in terms of specific tech if not in terms of general principles. If I were going to write about interfaces that are dominating or seeking to dominate the consumer and creative markets in 2018, instead of gestural or multitouch devices I would surely have to write about Voice User Interfaces (VUIs) such as Amazon Echo, Google Home, or Apple HomePod – those seemingly interface-less, inert boxes of computing power (for some reason, always coded as women – harkening back to a mid-20th-century notion of the female secretary as self-effacing, perpetually amiable, always helpful, always eager to please) embedded throughout homes, just sitting there waiting to respond to any voice command and appearing less as computers and more as artificially intelligent in-house butlers.

But I would also have to write about the growing inevitability of Virtual Reality systems such as Oculus Rift, Sony PlayStation VR, HTC Vive, Google Daydream View, Samsung Gear VR, and I’m sure many more. For me, the interface of VR is more difficult to account for than that of VUI devices, partly because VR poses two problems. For one, the interface that dominates most of the waking hours of the creator is entirely different from that of the user (god help the VR designer who has to design a VR environment in Unity with a headset on). The second problem is that, on the user side, it’s as if the interface has been displaced, moved to the side, outside of your vision and your touch, at the same time as you’re now meant to believe you’re inside the interface – as if your head is not in front of the screen but rather inside it. There is still a screen (even though it’s now inside your headset) and there is still a keyboard and mouse (you have to have a PC to use a device such as the Oculus Rift, so both the keyboard and mouse are presumably somewhere in the room with you as you play) but all these key components of the Keyboard-Screen-Mouse interface have been physically separated and then supplemented with hand-controllers. The way in which the interface for VR users has been physically removed to the peripheries of the room in which the user is stationed is, I think, quite significant. If an interface is the threshold between human and computer, any change in either side of the threshold or in the threshold itself (whether a change in functionality or in physical location) is bound to have a profound effect on the human. In the case of VR, the physical changes in the location of the interfaces alone are enough to fundamentally change the human user’s experience: the user is now standing up and mobile to an unprecedented degree, yet this mobility has nothing to do with exploring the interfaces themselves or their affordances – the mobility is entirely in the service of exploring a pre-determined, usually carefully controlled virtual environment.

One of the recurring issues I raise in Reading Writing Interfaces is that of invisibility – especially the danger in interfaces designed either to disappear from view or to distract us from the fact that we have no understanding of and no access to how the interface is shaping and determining what and how we know, what and how we create. As I wrote in the introduction, “Despite our best efforts to literally and figuratively bring these invisible interfaces back into view, either because we are so enmeshed in these media or because the very definition of ideology is that which we are not aware of, at best we may only partly see the shape of contemporary computing devices.” However, as I argued in my book and as I still believe, literature and the arts are built to take on the work of demystifying these devices and these interfaces and making both visible once again. Thus, while I think the fundamental problem with VR as I describe it above is that users are becoming even more estranged, even more alienated from whatever lies behind the glossy digital interface (in fact, now the estrangement is both literal and figurative, as the computer producing the VR experience is potentially as much as twelve feet away), I have already noticed that writers and artists are taking this challenge on. This is precisely why I was so eager to interview Illya Szilak about “Queerskins,” the work of interactive cinema she is creating with Cyril Tsiboulski for the Oculus Rift. Szilak is a long-time active participant in the digital literature community as both a writer/creator and a critic (writing for the Huffington Post). Her transmedia novel Reconstructing Mayakovsky was included in the second volume of the Electronic Literature Collection. Her latest piece, Queerskins, is described on the project’s Kickstarter page as “a groundbreaking interactive LGBTQ centered drama that combines cutting-edge tech with intimate, lyrical storytelling.” Below are my questions and her answers – enjoy!

*

In the process of writing and creating Queerskins with Cyril Tsiboulski, where do you locate the interface in virtual reality systems such as the Oculus Rift? Is the interface different for you as writer/creator than it is for the reader/user?
The interface for us is between reality and virtuality. The hardware of VR is what allows us to navigate that. Of course, books, too, do that, but, without going into a scientific or philosophical discussion of “presence,” suffice it to say that the experience of being in another world is a primordial mechanism for organismal survival that relies on a motor map of interactivity. Reading may create a conceptual version of this, but your body does not experience it in the same way. For me it relates theoretically to Marinetti’s total theater, especially his manifesto on Tactilism, in which he says: “The identification of five senses is arbitrary, and one day we will certainly discover and catalogue numerous other senses. Tactilism will contribute to this discovery.”

I think machine-extended bodies have the potential to learn new forms of sensing, of “reading” the world, and VR will certainly be used for this transcendence. We are also interested in producing a kind of Brechtian estrangement, so that this transcendence is always in dialogue with the realities of embodiment; even VR hardware requires a body to perceive.

In Queerskins, we are using a variety of technologies and techniques to manufacture an aesthetic and narrative sweet spot between reality and artifice. It was important not to make this into a seamless whole but to leave spaces between, so that the user is able to attain some amount of critical distance; so we are combining a 3D-modeled car interior created from photogrammetry, 360 video landscapes, 3D-scanned and CGI objects and animations, and live-action 3D volumetric video.

As with all our narratives, we want to explore the tension between material, historical, embodied realities and virtual realities, which, in the case of VR, includes not just 3D immersive environments but also, as with our online narratives, harnessing user imagination and memory. For me, ethics are linked to the material and historical. The lived realities (this time–this place) of LGBTQ people can’t be wished away. So, it was really important to use historically accurate objects. Almost every object in the box and many of the sounds are archival – bought off eBay and 3D scanned or, in the case of sound, recorded by others in different environments or found on the Internet Archive.

At the same time, we recognize the perhaps quintessentially human desire for transcendence: through love, technology, sex, religion, writing, art, imagination, memory and storytelling itself. So, we love the incredible possibilities that VR generates in that respect (we have a plan for three more episodes of Queerskins: a love story, and transcendence becomes a more and more complicated and innovative part of this, though always connected back to material/historical realities). In this episode, we are more interested in the gravity of the experience. A young man has died of AIDS, essentially abandoned by his parents. So, no, you don’t get to fly or move mountains. You will be allowed two transcendent moments, which we are calling memory spaces. In them, there is a change of place and time. In the first, you can get up and walk through a cathedral, but all you will find are the sounds and images of Sebastian’s everyday life – in other words, memories.

These moments of transcendence are differentiated from the emotionally wrenching reality of the car ride both aesthetically and through the user’s sense of space and agency. These latter elements are, of course, key narrative devices in VR. VR is a spatial medium, more so, I think, than film; it was important for us to create a situation that could be read “body to body” because we wanted to hook into the old brain of motor neurons to create emotional responses to the environment. For the most part, the visitor (that’s what they call the user or reader now) is stuck in the car with no way to move (in fact, we will seat-belt them into the chair we created for filming the actors in front of a green screen) behind the two actors. So, we actually worked with theater director, choreographer and Butoh dancer Dawn Saito to choreograph the two actors’ gestures in a kind of missed call-and-response.

Again, staying with gravity and materiality and mortality, and to maintain immersion – because we need the user to be absolutely present for the emotional bloodletting happening in close proximity in the front seat – it was important that users not be distracted by having to learn new actions and that their interactions be of the everyday variety; so you can pick up things, you can turn the pages of a diary, you can walk. (In later episodes you will be able to play music on a virtual lover’s body or walk on the ceiling and have kaleidoscopic housefly-vision, and make the statues of a church come to life…) But for this one, well, hey, you are walking and moving and doing and alive and, that, at least, is better than dead. Limiting user agency here is purposeful. We want you to feel the loss: you cannot speak, nor write, nor communicate in any way, just as Mary-Helen cannot tell her son that she is sorry or that she loved him.

We are really interested in sound because not only is it spatial, it can be used to harness the user’s imagination – very old-school transcendence, and also very much associated with older technologies like radio and text. So, Queerskins starts out with credits and sound. (Skywalker Studio is doing sound post-production and audio design for us.) You begin by imagining the story. This cues users subtly to their role. You are the co-creator. You get pieces of information and have to put together the story and, most importantly, you come up with an idea of who the man who died was and the life he led. When you finally hear his voice speaking his own intimate diary entries, he may or may not be what you thought he would be. The diary is in a sense the missing body. You will find it on one side of you on a pew in the cathedral memory space. On the other side is Sebastian’s empty funeral suit. (Which is the queerskin?)

Can you describe a few creative possibilities opened up by the software/hardware you’re using?
Depthkit was used for filming the actors – it’s basically software that takes data from a high-end DSLR video camera, which creates a texture map, and from a Kinect, which creates a volume map, and fuses the two to create live-action 3D volumetric video. It allows us to have actors in a CGI environment, which gives the user a sense of “being there” much more than 360 video does. The 360 video outside the car has a flatness, a nostalgic aesthetic that we actively sought after we did initial experiments (we flew to Missouri and drove around rural areas ten hours a day with a stranger I met on FB). It looks like old rear-screen projection, or like you are traveling through a home movie. It’s “real” in the sense that documentary video is “real,” but you don’t feel like you are really in the space with your body.

Can you describe a few limitations you’ve faced by the same software/hardware? How has it shaped or determined what you’re able to create?
We spent a lot of time and money making sure we could actually get the actor footage into the CGI car in a way that looked realistic. That being said, it is not perfect – there is flare around the edges, and the Kinect reads everything in the same plane, so we had to make sure that the actors didn’t cross over into each other. As I said, we were already choreographing the actors, so this just became part of the choreography. The 360 video shakes because we shot from a car – we are removing that, but we will also be using haptics – a motor in the seat the visitor sits on – to make the user’s body feel like it is shaking. In VR, a lot of this comes down to $. There are alternatives which would cost a lot more. But, then, for us, part of this is working aesthetically around the limitations. Also, optimization in the game engine is an issue; we have had problems with frame rate drops that put our audio out of sync with video. These projects are so complicated that sometimes you don’t know what exactly is doing this. However, frame rate drops are not an opportunity for experimentation like some other hardware limitations. That is a failure. These are limits of aesthetic expectations and our hardwired senses – we can’t really play with that in this piece. So, Cyril has had to play with the script a lot to decrease frame rate drops.

How, if at all, do you want your reader/user to be aware of the VR interface?
We had to wrestle with whether to let the user get up and walk, because this would certainly disrupt the cohesiveness of the experience (the user might need some prompting and direction and will need to be led back physically to the seat), but, in the end, we decided the agency and sense of freedom this afforded (a refuge of sorts) was worth it. Moreover, this episode, like all planned episodes, is part of an interactive physical installation. They can act separately, but together they provide a richer experience. The installation for this episode is a performance art “game” that we will install with the VR piece. One other thing of interest: we are hoping to use Leap Motion for the haptics, not Oculus Touch. Leap Motion is controller-less – Cyril saw it in a Laurie Anderson piece at MASS MoCA and we are working with the developers. It feels incredibly natural, as your hands appear virtually. Leap Motion recognizes gestures, so the interface in this case really disappears, but for non-gamers it means there is no learning of controllers; for us this is optimal, especially given a film audience at first.

Judy Malloy’s donations to the MAL’s early e-literature collection

It’s an honor indeed to announce that Judy Malloy, a true pioneer of hypertext and electronic literature broadly, has donated a set of floppies as well as documentation to the Media Archaeology Lab. To give you a sense of her contributions to the field, I’ve excerpted the following from the longer, fascinating biography on her website:

Her work as a pioneer on the Internet and in electronic literature began after cataloguing, designing, and programming information systems in the mid and late sixties, at a time when library information systems designers were among the first to utilize computers to access information, and futurists were envisioning their use in the humanities. She began creatively using narrative information in artists’ books in the late seventies and early eighties and then, with a vision of nonsequential literature, wrote and programmed Uncle Roger — one of the first (if not the first) works of hypertext literature — on Art Com Electronic Network on The WELL (1986-1988). In the following years, she created a series of innovative literary works that run on computer platforms and were published by Eastgate and on the Internet. In 1993, she was invited to Xerox PARC, where she worked in CSL (Computer Science Laboratory) as the first artist in their artist-in-residence program. Judy Malloy created one of the first arts websites, Making Art Online (1993-1994), originally commissioned in collaboration with the ANIMA site in Vancouver (CSIR/Western Front) and currently hosted on the website of the Walker Art Center. l0ve0ne, written and coded in 1994, was the first selection in the Eastgate Web Workshop. A complete collection of her papers and software is archived in the Judy Malloy Papers at the David M. Rubenstein Rare Book & Manuscript Library at Duke University.

Below is Malloy’s packing list of the works she has generously donated to the lab – I will soon test all the floppies and will add notes here as to their functionality. Enjoy and, as always, the MAL welcomes visiting researchers!

*

Disk labeled “molasses”
Malloy’s 1988 HyperCard stack Molasses.

Judy Malloy, Molasses, Berkeley, CA, 1988. (For Macintosh computers, HyperCard – produced at the Whole Earth Review under sponsorship of Apple Computer.) Exhibited in the traveling exhibition Art Com Software at Tisch School of the Arts, New York University, NYC, NY, 1988, and other places.

Judy Malloy, its name was Penelope, 1990.
This is probably a PC disk and an interim version between the 1989 exhibition version and the more formally packaged 1991 version, which was distributed by Art Com Software.

Judy Malloy, its name was Penelope. Eastgate Systems, 1993.
This was Eastgate’s first version, published on disk for both Macs and PCs. The disk is signed and actually says 1992. This copy was my mother’s copy, which is why there is a label that says Barbara Powers in it. Note that the pages in these early editions stuck together.

Judy Malloy, Wasting Time, Penelope, Uncle Roger
It looks as if all three of these works are on the disk. It was probably a disk I used to send around the works for exhibition consideration and is probably a PC disk. Wasting Time was published as follows: Judy Malloy, “Wasting Time: A Narrative Data Structure,” After the Book (Perforations 3), Summer 1992.

Judy Malloy and Cathy Marshall, Forward Anywhere. Eastgate Systems, 1996.
This is a disk version. It was published in both Mac and PC versions, but this is probably a PC version. A second version was published with a CD.

James Johnson, Second Thoughts, 1989.
Distributed by Art Com Software. He sent me a couple of copies, and I gave the other one to my archives at Duke.

Documentation Folders

Bad Information Base #1
This is the first work of computer-mediated text that I created. Note that it is not the Bad Information Base #2, which was created on ACEN later in 1986. Bad Information Base #1 is documented in Judy Malloy, “OK Research/OK Genetic Engineering/Bad Information, Information Art Defines Technology,” Leonardo 21(4): 371-375, 1988. It is explained in the May 1986 documentation in the folder. Basically, I made the database and then sent out cards to the mail art network. When the cards were returned, I ran a search and then sent a printout to the requester. In addition to a documentation sheet, the folder includes a blank search card, an envelope label (it was pasted onto the envelopes), a second edition envelope, a blank letterhead sheet, and a copy of the accordion-fold list of keywords that was sent along with the card. I don’t have a disk of this work available, but Duke has printouts and a notebook with copies of the completed search cards.

Uncle Roger
A documentation sheet for A Party in Woodside, 1987

This was probably included with the 1987 version of A Party in Woodside, which was self-published and distributed by Art Com.

An instruction booklet that was included in the packaging of the Apple II version of Uncle Roger, which contained all three files. This version was probably published (self-published by Bad Information) in 1988 and was distributed by Art Com.

its name was Penelope
Documentation for the exhibition version.

A flyer advertising the self-published (Narrabase Press) version that was available from Art Com.

Unassembled packaging for the Narrabase Press version. The 3 pieces inside the watercolor paper folder are a cover, a back cover page, and instructions. These pieces were pasted onto folded watercolor paper, and a pocket that I constructed inside the folded watercolor paper contained a disk. An unassembled disk cover is also included. The whole, when assembled, was enclosed in a heavy clear plastic sleeve.

Molasses
This folder contains a few Xeroxes or printouts of screens from Molasses, one of which has instructions for reading the work.

Wasting Time
A documentation sheet for Wasting Time.

From the Philosophy of the Open to the Ideology of the User-Friendly

Below is an excerpt from chapter two, “From the Philosophy of the Open to the Ideology of the User-Friendly,” from my book Reading Writing Interfaces: From the Digital to the Bookbound (University of Minnesota Press 2014). It is also the basis of the talk I gave at MLA in January 2013 and the full version of the talk I gave at Counterpath Press in February 2013. As always, I welcome your comments!

*

“Knowledge is power: information is the fabric of knowledge; the controller of information wields power.”
–“Some Laws of Personal Computing,”
Byte 1979 (Lewis 191)

“If a system is to serve the creative spirit, it must be entirely comprehensible to a single individual…Any barrier that exists between the user and some part of the system will eventually be a barrier to creative expression. Any part of the system that cannot be changed or that is not sufficiently general is a likely source of impediment.”
–“Design Principles Behind Smalltalk,” Byte 1981 (Ingalls 286)

My talk today is concerned with a decade in which we can track the shift from seeing the user-friendly computer as a tool that, through a graphical user interface (GUI), encourages understanding, tinkering, and creativity, to seeing it as a computer that uses a GUI to create an efficient work-station for productivity and task-management – and with the effect of this shift particularly on digital literary production. The turn from computer systems based on the command-line interface to those based on “direct manipulation” interfaces that are iconic or graphical was driven by rhetoric insisting that the GUI, particularly the one pioneered by the Apple Macintosh design team, was not just different from the command-line interface but naturally better, easier, friendlier. The Macintosh was, as Jean-Louis Gassée (who headed up its development after Steve Jobs’s departure in 1985) writes without any hint of irony, “the third apple,” after the first apple in the Old Testament and the second apple that was Isaac Newton’s, “the one that widens the paths of knowledge leading toward the future.”

Despite studies released since 1985 that clearly demonstrate GUIs are not necessarily better than command-line interfaces in terms of how easy they are to learn and to use, Apple – particularly under Jobs’s leadership – successfully created such a convincing aura of inevitable superiority around the Macintosh GUI that to this day the same “user-friendly” philosophy, paired with a no-longer-noticed closed architecture, fuels consumers’ religious zeal for Apple products. I have been an avid consumer of Apple products since I owned my first Macintosh PowerBook in 1995; but what concerns me is that “user-friendly” now takes the shape of keeping users steadfastly unaware and uninformed about how their computers – their reading/writing interfaces – work, let alone how those interfaces shape and determine users’ access to knowledge and their ability to produce knowledge. As Wendy Chun points out, the user-friendly system is one in which users are, on the one hand, given the ability to “map, to zoom in and out, to manipulate, and to act,” but the result is a “seemingly sovereign individual” who is mostly a devoted consumer of ready-made software and ready-made information whose framing and underlying mechanisms we are not privy to.

However, it’s not necessarily the GUI per se that is responsible for the creation of Chun’s “seemingly sovereign individual” but rather a particular philosophy of computing and design, underlying one model of the GUI, that has become the standard for nearly all interface design. The earliest example of a GUI-like interface whose philosophy is fundamentally different from that of the Macintosh is Douglas Engelbart’s NLS, or “oN-Line System,” which he began work on in 1962 and famously demonstrated in 1968. While his “interactive, multi-console computer-display system” with keyboard, screen, mouse, and something he called a chord handset is commonly cited as the originator of the GUI, Engelbart wasn’t so much interested in creating a user-friendly machine as he was invested in “augmenting human intellect”. As he first put it in 1962, this augmentation meant “increasing the capability of a man to approach a complex problem situation, to gain comprehension to suit his particular needs, and to derive solutions to problems”. The NLS was not about providing users with ready-made software and tools to choose from or consume, but rather about bootstrapping, or “the creation of tools for expert computer users”, and providing the means for users to create better tools, or tools better suited to their own individual needs. We can see this emphasis on tool-building and customization that comes out of an augmented intellect in Engelbart’s provision of “view-control” (which allows users to determine how much text they see on the screen as well as the form of that view) and “chains of views” (which allows the user to link related files) in his document editing program.

Underlining the fact that the history of computing is resolutely structured by stops, starts, and ruptures rather than a series of linear firsts, in the year before Engelbart gave his “mother of all demos,” Seymour Papert and Wally Feurzeig began work on a learning-oriented programming language they called ‘Logo’ that was explicitly for children but implicitly for learners of all ages. Throughout the 1970s, Papert and his team at MIT conducted research with children in nearby schools as they tried to create a version of Logo that was defined by “modularity, extensibility, interactivity, and flexibility”. The Apple II was the most popular home computer from the late 1970s until the mid-1980s and, given its open architecture, a public version of Logo was licensed for Apple II computers as well as for the less popular Texas Instruments TI 99/4. In 1980, Papert published the decidedly influential Mindstorms: Children, Computers, and Powerful Ideas, in which he makes claims about the power of computers that are startling for a contemporary readership steeped in an utterly different notion of what accessible or user-friendly computing might mean. Describing his vision of “computer-aided instruction” in which “the child programs the computer” rather than one in which the child adapts to the computer or even is taught by the computer, Papert asserts that children thereby “embark on an exploration about how they themselves think…Thinking about thinking turns the child into an epistemologist, an experience not even shared by most adults” (19). And two years later, in a February 1982 issue of Byte magazine, Logo is advertised as a general-purpose tool for thinking, with a degree of intellectuality rare for any advertisement: “Logo has often been described as a language for children. It is so, but in the same sense that English is a language for children, a sense that does not preclude its being ALSO a language for poets, scientists, and philosophers”. Moreover, for Papert, thinking about thinking by way of programming happens largely when the user encounters bugs in the system and has to identify where the bug is in order to remove it: “One does not expect anything to work at the first try. One does not judge by standards like ‘right – you get a good grade’ and ‘wrong – you get a bad grade.’ Rather one asks the question: ‘How can I fix it?’ and to fix it one has first to understand what happened in its own terms” (101). Learning through doing, tinkering, experimentation, trial-and-error is, then, how one comes to have a genuine computer literacy.
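
To make Papert’s debugging loop concrete, here is a minimal sketch – mine, not Papert’s – of the canonical turtle-drawing exercise, rendered in Python’s standard turtle module (a direct descendant of Logo’s turtle graphics). The procedure and the 45-degree bug in the comment are illustrative assumptions, not examples drawn from Mindstorms.

```python
import turtle

def square(side_length):
    """Draw a square with four 'forward, turn' steps."""
    for _ in range(4):
        turtle.forward(side_length)
        # A first attempt often turns by 45 degrees; the figure never
        # closes, and the question becomes "How can I fix it?" Fixing it
        # means discovering that a square's four corners need 90-degree
        # turns, since the turtle must rotate 360 degrees in total.
        turtle.right(90)

square(100)
turtle.done()
```

The point is not the square but the cycle: run the procedure, watch the wrong drawing appear, reason about why, and revise.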

In the year after Papert et al. began work on Logo and the same year as Engelbart’s NLS demo, Alan Kay also commenced work on the never-realized Dynabook, later produced as an “interim Dynabook” in the form of the GUI-based Xerox Alto, which could also run the Smalltalk language. Kay thereby introduced the notion of “personal dynamic media” for “children of all ages” which “could have the power to handle virtually all of its owner’s information-related needs”. Kay, then, along with Engelbart and Papert, understood very clearly the need for computing to move from the specialized environment of the research lab into people’s homes by way of a philosophy of the user-friendly oriented toward the flexible production (rather than rigid consumption) of knowledge. It was a realization eventually shared by the broader computing community, for by 1976 Byte magazine was publishing editorials such as “Homebrewery vs the Software Priesthood,” declaring that “the movement towards personalized and individualized computing is an important threat to the aura of mystery that has surrounded the computer for its entire history” (90). And more:

The movement of computers into people’s homes makes it important for us personal systems users to focus our efforts toward having computers do what we want them to do rather than what someone else has blessed for us…When computers move into peoples’ homes, it would be most unfortunate if they were merely black boxes whose internal workings remained the exclusive province of the priests…Now it is not necessary that everybody be a programmer, but the potential should be there…(90).

Image from “Homebrewery vs the Software Priesthood,” Byte magazine, October 1976.

It was precisely the potential for programming, or simply for novice as well as expert use, via an open, extensible, and flexible architecture that Engelbart, Papert, and Kay sought to build into their models of the personal computer to ensure that home computers did not become “merely black boxes whose internal workings remained the exclusive province of the priests.” By contrast, as Kay later exhorted his readers in 1977, “imagine having your own self-contained knowledge manipulator in a portable package the size and shape of an ordinary notebook”. Designed to have a keyboard, an NLS-inspired “chord” keyboard, mouse, display, and windows, the Dynabook would allow users to realize Engelbart’s dream of a computing device that gave them the ability to create their own ways to view and manipulate information. Rather than the over-determined post-Macintosh GUI computer, which has been designed to pre-empt every user’s possible need with the creation of an over-abundance of ready-made tools such that “those who wish to do something different will have to put in considerable effort,” Kay wanted a machine that was “designed in a way that any owner could mold and channel its power to his own needs…a metamedium, whose content would be a wide range of already-existing and not-yet-invented media” (403). More, Kay understood from reading Marshall McLuhan that the design of this new metamedium was no small matter, for the very use of a medium changes an individual’s – a culture’s – thought patterns. Clearly, he wanted thought patterns to move toward a literacy that involved reading and writing in the new medium instead of the unthinking consumption of ready-made tools, for, crucially, “the ability to ‘read’ a medium means you can access materials and tools created by others. The ability to ‘write’ in a medium means you can generate materials and tools for others. You must have both to be literate”.

While Kay envisioned that the GUI-like interface of the Dynabook would play a crucial role in realizing this “metamedium,” the Smalltalk software driving this interface was equally necessary. Its goal was “to provide computer support for the creative spirit in everyone” (286). Not surprisingly, Kay and his collaborators began working intensely with children after the creation of Smalltalk-71. Influenced by developmental psychologist Jean Piaget, as well as by Kay’s own observation of Papert and his colleagues’ use of Logo in 1968, Smalltalk relied heavily on graphics and animation through one particular incarnation of the GUI: the Windows, Icons, Menus, and Pointers (or WIMP) interface. Kay writes that in the course of observing Papert using Logo in schools, he realized that these were children “doing real programming…”:

  …this encounter finally hit me with what the destiny of personal computing really was going to be. Not a personal dynamic vehicle, as in Engelbart’s metaphor opposed to the IBM “railroads”, but something much more profound: a personal dynamic medium. With a vehicle one could wait until high school and give “drivers ed”, but if it was a medium, it had to extend into the world of childhood (“The Early History” 81).

As long as the emphasis in computing was on learning – especially through making and doing – the target demographic was going to be children; and as long as children could use the system, so too could any adult, provided they understood the underlying structure, the how and the why, of the programming language. As Kay astutely points out, “…we make not just to have, but to know. But the having can happen without most of the knowing taking place”. And, as he goes on to point out, designing the Smalltalk user interface shifted the purpose of interface design from “access to functionality” to an “environment in which users learn by doing” (84). And so Smalltalk’s designers didn’t so much reject the notion of readymade software as seek to provide the user with a set of software building blocks which the user could then combine and/or edit to create their own customized system. Or, as Trygve Reenskaug (a visiting Norwegian computer scientist with the Smalltalk group at Xerox PARC in the late 1970s) put it:

…the new user of a Smalltalk system is likely to begin by using its ready-made application systems for writing and illustrating documents, for designing aircraft wings, for doing homework, for searching through old court decisions, for composing music, or whatever. After a while, he may become curious as to how his system works. He should then be able to “open up” the application object on the screen to see its component parts and to find out how they work together (166).
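
Reenskaug’s description translates, loosely, into what we would now call an inspectable, rebindable component architecture. Here is a toy sketch in Python – not Smalltalk, and not code from any of these historical systems – of software that ships as building blocks the user can open up, inspect, and recombine:

```python
# A toy sketch of Reenskaug's idea: a ready-made application whose
# component parts the curious user can "open up" and replace with
# parts of their own making.
class DocumentEditor:
    def __init__(self):
        # the building blocks the system ships with
        self.parts = {"spellcheck": lambda text: text,
                      "render": lambda text: text.upper()}

    def open_up(self):
        """Show the component parts instead of hiding them."""
        return list(self.parts)

    def replace_part(self, name, new_part):
        """Let the user rebuild the tool to suit their own needs."""
        self.parts[name] = new_part

    def edit(self, text):
        return self.parts["render"](self.parts["spellcheck"](text))

editor = DocumentEditor()
print(editor.open_up())                             # ['spellcheck', 'render']
editor.replace_part("render", lambda t: t.title())  # customize the system
print(editor.edit("a user-built tool"))             # 'A User-Built Tool'
```

The design choice at stake is exactly the one Goldberg describes below: the vendor supplies an initial shape, and the user is free to see inside it and reshape it.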

With an emphasis on learning and building through an open architecture, Adele Goldberg – co-developer of Smalltalk along with Alan Kay and author of most of the Smalltalk documentation – describes, in the August 1981 special issue of Byte, the Smalltalk programming environment as one that set out to defy the conventional software development environment, as illustrated in Figure 1 below:

Image by Adele Goldberg contrasting the conventional philosophy of software driven by “wizards” in Figure 1 versus that provided by Smalltalk for the benefit of the programmer/user in Figure 2.

The Taj Mahal in Figure 1 “represents a complete programming environment, which includes the tools for developing programs as well as the language in which the programs are written. The users must walk whatever bridge the programmer builds” (Goldberg 18). Figure 2, by contrast, represents a Taj Mahal in which the “software priest” is transformed into one who merely provides the initial shape of the environment which programmers can then modify by building “application kits” or “subsets of the system whose parts can be used by a nonprogrammer to build a customized version of the application” (18). The user or non-programmer, then, is an active builder in dialogue with the programmer instead of a passive consumer of a pre-determined (and perhaps even over-determined) environment.

At roughly the same time as Kay began work on Smalltalk in the early 1970s, he was also involved with the team of designers working on the NLS-inspired Xerox Alto, which was developed in 1973 as, again, an “interim Dynabook”: it had a three-button mouse and a GUI that worked in conjunction with the desktop metaphor, and it ran Smalltalk. While only several thousand non-commercially available Altos were manufactured, it was – as team members Chuck Thacker and Butler Lampson believe – probably the first computer explicitly called a “personal computer” because of its GUI and its network capabilities. By 1981, Xerox had designed and produced a commercially available version of the Alto, called the 8010 Star Information System, which was sold along with Smalltalk-based software. But as Jeff Johnson et al. point out, the most important connection between Smalltalk and the Xerox Star lay in the fact that Smalltalk could clearly illustrate the compelling appeal of a graphical display that the user accessed via mouse, overlapping windows, and icons (22).

Screenshot of Xerox Star from Jeff Johnson et al’s “The Xerox Star: A Retrospective.”

However, the significance of the Star lies partly in the indisputable impact it had on the GUI design of first the Apple Lisa and then the Macintosh; its significance also lies in the way in which it was clearly labeled a work-station for “business professionals who handle information” rather than a metamedium or a tool for creating or even thinking about thinking. And in fact the Star – the first commercially available computer born out of work by Engelbart, Papert, and Kay that attempted to satisfy both novice and expert users by providing an open, extensible, flexible environment, and that also happened to be graphical – had an interface that was conflicted at its core. While in some ways the Star was philosophically very much in line with the open thinking of Engelbart, Papert, and Kay, in other ways its philosophy as much as its GUI directly paved the way to the closed architecture and consumption-based design of the Macintosh. Take, for example, the overall design principles of the Star, which were aimed at making the system seem “familiar and friendly”:

Easy            Hard

concrete        abstract
visible         invisible
copying         creating
choosing        filling in
recognizing     generating
editing         programming
interactive     batch

Star designers, that is, vowed to adhere to a schema that exemplifies the characteristics listed on the left while avoiding those listed on the right. While there’s little doubt that ease-of-use was of central importance to Engelbart, Papert, and Kay – often brought about through interactivity and through making computer operations and commands visible – the avoidance of “creating,” “generating,” or “programming” couldn’t be further from their vision of the future of computing. At the same time as the Star forecloses on creating, generating, and programming through its highly restrictive set of commands in the name of simplicity, it also wants to promote users’ understanding of the system as a whole – although, again, we can see that this particular incarnation of the GUI represents the beginning of a shift toward only a superficial understanding of the system. Without a fully open, flexible, and extensible architecture, the home computer becomes less a tool for learning and creativity and more a tool for simply “handling information.”

By contrast, as I’ll now discuss, the Apple Macintosh was clearly designed for consumers, not creators. It was marketed as a democratizing machine when in fact it was democratizing only insofar as it marked a profound shift in personal computing away from the sort of inside-out know-how one needed to create on an Apple II toward the kind of perfunctory know-how one needs to navigate the surface of the Macintosh – the kind of knowledge needed to click this or that button. The Macintosh was democratic only in the manner any kitchen appliance is democratic. That said, Apple’s redefinition of the overall philosophy of personal computing exemplifies just one of many reversals that abound in this ten-year period from the mid-1970s to the mid-1980s. In relation to the crucial change that took place in the mid-1980s from open, flexible, and extensible computing systems for creativity to ones that were closed, transparent, and task-oriented, the way in which the Apple Macintosh was framed from the time of its release in January 1984 represented a near-complete purging of the philosophy promoted by Engelbart, Kay, and Papert. This purging of the recent past took place under the guise of Apple’s version of the user-friendly that, among other things, pitted itself against the supposedly “cryptic,” “arcane,” “phosphorescent heap” that was the command-line interface as well as, it was implied, any earlier incarnation of the GUI.

However, it’s important to note that, while the Macintosh philosophy purged much of what had come before, it did in fact emerge from momentum gathering in other parts of the computing industry, which were particularly concerned to define standards for the computer interface. Up to this point, personal computers were remarkably different from each other. Commodore 64 computers, for example, came with both a ‘Commodore’ key that gave the user access to an alternate character set and four programmable function keys that, with the shift button, could each be programmed for two different functions. By contrast, Apple II computers came with two programmable function keys, and Apple III, IIc, and IIe computers came with open-Apple and closed-Apple keys that provided the user with shortcuts such as cut-and-paste or copy (in the same way that the contemporary ‘command’ key functions).

No doubt in response to the difficulties this variability posed to expanding the customer base for personal computers, Byte magazine ran a two-part series in October and November 1982 dedicated to the issue of industry standards by way of an introduction to a proposed uniform interface called the “Human Applications Standard Computer Interface” (or HASCI). Asserting the importance of turning the computer into a “consumer product,” author Chris Rutkowski declares that every computer ought to have a “standard, easy-to-use format” that “approaches one of transparency. The user is able to apply intellect directly to the task; the tool itself seems to disappear” (291, 299-300). Of course, a computer that is easy to use is entirely desirable; however, at this point ease-of-use is framed in terms of the disappearance of the tool being used in the name of ‘transparency’ – which now means users can efficiently accomplish their tasks with the help of a glossy surface that shields them from the depths of the computer – instead of the earlier notion of ‘transparency,’ which referred to a user’s ability to open up the hood of the computer to understand directly its inner workings.

Thus, no doubt in a bid to finally produce a computer that realized these ideas and appealed to consumers who are “drivers, not repairmen,” Apple unveiled the Lisa in June 1983 for nearly $10,000 (that’s $23,000 in 2012 dollars) as a cheaper and more user-friendly version of the Xerox Alto/Star, which sold for $16,000 in 1981 (about $40,000 in 2012 dollars). At least partly inspired by Larry Tesler’s 1979 Xerox PARC demo to Steve Jobs, the Lisa used a one-button mouse, overlapping windows, pop-up menus, a clipboard, and a trashcan. As Tesler was adamant to point out in a 1985 article on the “Legacy of the Lisa,” it was “the first product to let you drag [icons] with the mouse, open them by double-clicking, and watch them zoom into overlapping windows” (17). The Lisa, then, moved that much closer to realizing the dream of transparency – for example, through a mode of double-clicking meant to become a quick physical habit that bypasses the intellect; more, its staggering 2048K worth of software and its mere three expansion slots also firmly moved it in the direction of a readymade, closed consumer product and definitively away from the Apple II, which, when it was first released in 1977, came with 16K bytes of code and, again, eight expansion slots.

Expansion slots symbolize the direction that computing was to take from the moment the Lisa was released, followed by the release of the Macintosh in January 1984, to the present day. Jef Raskin, who originally began the Macintosh project in 1979, and Steve Jobs both believed that hardware expandability was one of the primary obstacles in the way of personal computing having broader consumer appeal. In short, expansion slots made standardization impossible (partly because software writers needed consistent underlying hardware to produce widely functioning products), whereas what Raskin and Jobs sought was an “identical, easy-to-use, low-cost appliance computer.” At this point, customization is no longer in the service of building, creating, or learning – it is, instead, for using the computer as one would any home appliance, and ideally this customization is only possible through software that the user drops into the computer via disk, just as they would a piece of bread into a toaster. Predictably, then, the original plan for the Macintosh had it tightly sealed, so that the user was only free to use the peripherals on the outside of the machine. While team-member Burrell Smith managed to convince Jobs to allow him to add in slots for users to expand the machine’s RAM, Macintosh owners were still “sternly informed that only authorized dealers should attempt to open the case. Those flouting this ban were threatened with a potentially lethal electric shock”.

That Apple could successfully gloss over the aggressively closed architecture of the Macintosh while at the same time market it as a democratic computer “for the people” marks just one more remarkable reversal from this period in the history of computing. As is clear in the advertisement below that came out in Newsweek Magazine during the 1984 election cycle, the Macintosh computer was routinely touted as embodying the principle of democracy. While it was certainly more affordable than the Lisa (in that it sold for the substantially lower price of $2495), its closed architecture and lack of flexibility could still easily allow one to claim it represented a decidedly undemocratic turn in personal computing.

Thus, 1984 became the year that Apple’s philosophy of the computer-as-appliance, encased in an aesthetically pleasing exterior, flowered into an ideology. We can partly see how their ideology of the user-friendly came to fruition through their marketing campaign which included a series of magazine ads such as the one below—

Advertisement for the Apple Macintosh from the November/December 1984 issue of Newsweek Magazine.

—along with one of the most well-known TV commercials of the late twentieth century. In the case of the latter, Apple takes full advantage of the powerful resonance still carried by George Orwell’s dystopian, post-World War II novel 1984 by reassuring us in the final lines of the commercial, which aired on 22 January 1984, that “On January 24th Apple Computer will introduce Macintosh. And you’ll see why 1984 won’t be like ‘1984.’”

Apple positions Macintosh, then, as a tool for and of democracy while also pitting the Apple philosophy against a (non-existent) ‘other’ (perhaps communist, perhaps IBM or ‘Big Blue’) who is attempting to oppress us with an ideology of bland sameness. Apple’s ideology thus “saves us” from a vague and fictional, but no less threatening, Orwellian, nightmarish ideology. As lines of robot-like people, all dressed in identical grey, shapeless clothing, march into the opening scene of the commercial, a narrator of this pre-Macintosh nightmare appears on a screen before them in something that appears to be a propaganda film. We hear, spoken fervently, “Today we celebrate the first glorious anniversary of the Information Purification Directives.” And, as Apple’s hammer-thrower then enters the scene, wearing bright red shorts and pursued by soldiers, the narrator of the propaganda film continues:

We have created for the first time in all history a garden of pure ideology, where each worker may bloom, secure from the pests of any contradictory true thoughts. Our Unification of Thoughts is more powerful a weapon than any fleet or army on earth. We are one people, with one will, one resolve, one cause. Our enemies shall talk themselves to death and we will bury them with their own confusion.

And just before the hammer is thrown at the film-screen, causing a bright explosion that stuns the grey-clad viewers, the narrator finally declares, “We shall prevail!” But who exactly is the hammer-thrower-as-underdog fighting against? Who shall prevail – Apple or Big Brother? Who is warring against whom in this scenario and why? In the end, all that matters is that, at this moment, just two days before the official release of the Macintosh, Apple has created a powerful narrative of its unquestionable, even natural superiority over other models of computing that continues well into the twenty-first century. It is an ideology that of course masks itself as such and that is born out of the creation of and then opposition to a fictional, oppressive ideology we users/consumers need to be saved from. In this context, the fervor with which even Macintosh team-members believed in the rightness and goodness of their project is somewhat less surprising as they were quoted in Esquire earnestly declaring, “Very few of us were even thirty years old…We all felt as though we had missed the civil rights movement. We had missed Vietnam. What we had was the Macintosh”.

Even non-fiction accounts of the Macintosh by non-Apple employees could not help but endorse it in terms as breathless as those used by the Macintosh team-members themselves. Steven Levy’s Insanely Great, from 1994, is a document as remarkable for its wholesale endorsement of this new model of personal computing as any of the Macintosh advertisements and guidebooks. Recalling his experience of seeing a demonstration of a Macintosh in 1983, he writes:

Until that moment, when one said a computer screen “lit up,” some literary license was required…But we were so accustomed to it that we hardly even thought to conceive otherwise. We simply hadn’t seen the light. I saw it that day…By the end of the demonstration, I began to understand that these were things a computer should do. There was a better way (4).

The Macintosh was not simply one of several alternatives – it represented the unquestionably right way for computing. Even at the time of writing that book, in 1993, Levy still declares that each time he turns on his Macintosh, he is reminded “of the first light I saw in Cupertino, 1983. It is exhilarating, like the first glimpse of green grass when entering a baseball stadium. I have essentially accessed another world, the place where my information lives. It is a world that one enters without thinking of it…an ephemeral territory perched on the lip of math and firmament” (5). But it is precisely the legacy of the unthinking, invisible nature of the so-called “user-friendly” Macintosh environment that has foreclosed on using computers for creativity and learning and that continues in contemporary multi-touch, gestural, and ubiquitous computing devices such as the iPad and the iPhone, whose interfaces are touted as utterly invisible (and whose inner workings are therefore de facto inaccessible).

References

“‘1984’ Apple Macintosh Commercial.” YouTube. 27 Aug. 2008. Web. 21 June 2012.

Apple Computer Inc. Apple Human Interface Guidelines: The Apple Desktop Interface. Reading, MA: Addison-Wesley, 1987.

Bardini, Thierry. Bootstrapping: Douglas Engelbart, Coevolution, and the Origins of Personal Computing. Stanford, CA: Stanford UP, 2000.

Chen, Jung-Wei and Jiajie Zhang. “Comparing Text-based and Graphic User Interfaces for Novice and Expert Users.” AMIA Annual Symposium Proceedings Archive. 2007. Web. 14 February 2012.

Chun, Wendy. Programmed Visions: Software and Memory. Cambridge, MA: MIT Press, 2011.

Engelbart, Douglas. “Augmenting Human Intellect: A Conceptual Framework.” In The New Media Reader. Eds. Noah Wardrip-Fruin and Nick Montfort. Cambridge, MA: MIT Press, 2003. 95-108.

—. “Workstation History and the Augmented Knowledge Workshop.” Doug Engelbart Institute. 2008. Web. 3 April 2011.

—, and William English. “A Research Center for Augmenting Human Intellect.” In The New Media Reader. Eds. Noah Wardrip-Fruin and Nick Montfort. Cambridge, MA: MIT Press, 2003. 233-246.

Erickson, Thomas D. “Interface and the Evolution of Pidgins: Creative Design for the Analytically Inclined.” In The Art of Human-Computer Interface Design. Ed. Brenda Laurel. Reading, MA: Addison-Wesley Publishing Company, Inc., 1990. 11-16.

Gassée, Jean-Louis. The Third Apple: Personal Computers & the Cultural Revolution. San Diego, New York, London: Harcourt Brace Jovanovich Publishers, 1985.

Goldberg, Adele. “Introducing the Smalltalk-80 System.” Byte 6:8 (August 1981): 14-26.

Hertzfeld, Andy and Steve Capps et al. Revolution in the Valley. Sebastopol, CA: O’Reilly, 2005.

Ingalls, Daniel. “Design Principles Behind Smalltalk.” Byte 6:8 (August 1981): 286-298.

Johnson, Jeff and Theresa Roberts et al. “The Xerox Star: A Retrospective.” Computer 22:9 (September 1989): 11-29.

Johnson, Steven. Interface Culture: How New Technology Transforms the Way We Create and Communicate. New York: Basic Books, 1997.

Kay, Alan. “The Early History of Smalltalk.” Smalltalk dot org. Web. 5 April 2012.

—. “User Interface: A Personal View.” In The Art of Human-Computer Interface Design. Ed. Brenda Laurel. Reading, MA: Addison-Wesley Publishing Company, Inc., 1990. 191-207.

—, and Adele Goldberg. “Personal Dynamic Media.” In The New Media Reader. Eds. Noah Wardrip-Fruin and Nick Montfort. Cambridge, MA: MIT Press, 2003. 393-409.

Levy, Steven. Hackers: Heroes of the Computer Revolution. 25th Anniversary Edition. New York: O’Reilly Media, 2010.

—. Insanely Great: The Life and Times of Macintosh, the Computer that Changed Everything. New York: Viking, 1994.

Lewis, T.G. “Some Laws of Personal Computing.” Byte 4:10 (October 1979): 186-191.

Linden, Ted, Eric Harslem, Xerox Corporation. Office Systems Technology: A Look Into the World of the Xerox 8000 Series Products: Workstations, Services, Ethernet, and Software Development. Palo Alto, CA: Office Systems Division, 1982.

“LOGO.” Advertisement. Byte 7:2 (February 1982): 255.

Morgan, Chris, Gregg Williams, and Phil Lemmons. “An Interview with Wayne Rosing, Bruce Daniels, and Larry Tesler: A Behind-the-scenes Look at the Development of Apple’s Lisa.” Reprinted from Byte magazine 8:2 (February 1983): 90-114. Web. 14 April 2012.

Nelson, Theodor. “Computer Lib / Dream Machines.” The New Media Reader. Eds. Noah Wardrip-Fruin and Nick Montfort. Cambridge, MA: MIT Press, 2003. 303-338.

Papert, Seymour. Mindstorms: Children, Computers, and Powerful Ideas. New York: Basic Books, 1980.

Reenskaug, Trygve. “User-Oriented Descriptions of Smalltalk Systems.” Byte 6:8 (August 1981): 148-166.

Reimer, Jeremy. “Total share: 30 years of personal computer market share figures.” Ars Technica. 2006. Web. 4 December 2011.

Rutkowski, Chris. “An Introduction to the Human Applications Standard Computer Interface: Part 1: Theory and Principles.” Byte 7:10 (October 1982): 291-310.

—. “An Introduction to the Human Applications Standard Computer Interface: Part 2: Implementing the HASCI Concept.” Byte 7:11 (November 1982): 379-390.

Smith, David Canfield and Charles Irby et al. “Designing the Star User Interface.” Byte 7:4 (April 1982): 242-282.

Tesler, Larry. “The Legacy of the Lisa.” Macworld magazine (September 1985): 17-22.

Wardrip-Fruin, Noah. Introduction to “A Research Center for Augmenting Human Intellect,” by Douglas Engelbart. In The New Media Reader. Eds. Noah Wardrip-Fruin and Nick Montfort. Cambridge, MA: MIT Press, 2003. 231-232.

“What is Logo?” The Logo Foundation. 2011. Web. 5 April 2012.

Whiteside, John and Sandra Jones, Paul S. Levy, Dennis Wixon. “User Performance with Command, Menu, and Iconic Interfaces.” CHI 1985 Proceedings. April 1985. 185-191.

Wilber, Mike and David Fylstra. “Homebrewery vs the Software Priesthood.” Byte 14 (October 1976): 90-94.

Williams, Gregg. “The Lisa Computer System: Apple Designs a New Kind of Machine.” Product Description. Byte 8:2 (February 1983): 33-50.

Wozniak, Steve. “The Apple-II.” System Description. Byte 2:5 (May 1977): 34-43.

“The whole world is faking it”: Computer-Generated Poetry as Linguistic Evidence

The following is a short review I wrote of discourse.cpp (pdf available here) by O.S. le Si, ed. Aurélie Herbelot, published by the Berlin-based Peer Press in 2011. The review was just published in the December issue of Computational Linguistics.

*

discourse.cpp (Peer Press, 2011) is a short collection of computer-generated poetry edited by computational linguistics scholar Aurélie Herbelot. Its “author,” O.S. le Si, is a computer used mainly for natural language processing, named after a program that tries to identify the meanings of words based on their contexts. In this case, Herbelot fed the program 200,000 pages from Wikipedia, which it parsed in order to output lists of items whose contexts resemble those of words such as “gender,” “love,” “family,” and “illness”; for example, Herbelot explains that the content of the opening piece, titled “the creation,” was “selected out of a list of 10,000 entries. Each entry was produced by automatically looking for taxonomic relationships in Wikipedia”; and, for the piece titled “gender,” she chose the “twenty-five best contexts for man and woman in original order. No further changes.” (47) The collection is, then, as we are told on the back cover, “about things that people say about things. It was written by a computer.”
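
Herbelot does not reproduce the program’s code in the collection, but the general technique she describes, ranking the contexts in which a target word appears, can be sketched in a few lines of Python. The sketch below is my own illustration, not O.S. le Si’s actual pipeline (which, per Herbelot, worked over parsed relationships such as taxonomies rather than a raw co-occurrence window); the function name, window size, and cutoff are all invented for the example.

```python
from collections import Counter
import re

def best_contexts(text, target, window=3, top_n=25):
    """Rank the words that most often occur within `window` tokens
    of `target`: a crude stand-in for the parsed "best contexts"
    that discourse.cpp extracts."""
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter()
    for i, tok in enumerate(tokens):
        if tok == target:
            # collect the words surrounding each occurrence of the target
            neighborhood = tokens[max(0, i - window):i + window + 1]
            counts.update(t for t in neighborhood if t != target)
    return counts.most_common(top_n)

# Reading a corpus's habits, as "gender" does:
#   best_contexts(corpus, "woman")  vs.  best_contexts(corpus, "man")
```

Even a toy like this makes the review’s larger point concrete: the “poem” is essentially a ranked report on how a corpus talks.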

Poets – or, for the sake of those still attached to the notion of an author who intentionally delivers well-crafted, expressive writing, “so-called poets” – have been producing writing with the aid of digital computer algorithms since Max Bense and Theo Lutz first experimented with computer-generated texts in 1959. The best-known English-language example is The Policeman’s Beard Is Half Constructed (1984), a collection of poems attributed to the Artificial Intelligence program Racter (and, it was later discovered, heavily edited by Racter’s creators William Chamberlain and Thomas Etter). discourse.cpp is yet another experiment in testing the capabilities of computer and programmer to create not so much “good” poetry as revealing poetry: poetry that is not meant to be close-read (most often to discover underlying authorial intent) but rather read as a kind of linguistic evidence. Here the collection records the program’s probing of trends in online human language use, which in turn, not surprisingly, reveals certain prevailing cultural norms; in “Gender,” for example, we can see quite clearly our culture’s continued attachment to heteronormative gender roles:

Woman               Man
man love —          — win title
— marry man         — love woman
— give birth        — claim be (18)

Moreover, this linguistic evidence draws attention to the ever-increasing intertwinement of human and digital computer, and to the resulting displacement of the human as sole reader-writer now that the computer reads and writes alongside (and often in collaboration with) the human.

As Herbelot rightly points out in the “Editor’s Foreword,” this experimentation with the computer as reader-writer comes, to a large extent, out of early twentieth-century avant-garde writing that similarly sought to undermine, if not displace, the individual intending author. The Dadaist Tristan Tzara, for instance, infamously wrote “TO MAKE A DADAIST POEM” (1920), in which he advocates writing poetry by cutting words out of a newspaper article, drawing them at random from a hat, and assembling the results into a poem by “an infinitely original author of charming sensibility.” Tzara was, of course, being typically Dadaist in his tongue-in-cheek attitude; but he was also, I believe, serious in his conviction that the combination of appropriation and chance-generated methods could produce original writing while undermining the egotism of the author. However, insofar as discourse.cpp descends from this lineage of chance-generated writing – and from the later turn to computer technology as the newest means of producing it – it also inherits a certain disingenuousness that runs through the lineage. No matter how much Tzara and later authors of computer-generated writing sought to remove the human-as-author, there was and still is no getting around the fact that humans are deeply involved in the creation process – whether as cutters-and-pasters, computer programmers, inputters, or editors. The collection, then, is a much more complex amalgam than even Herbelot seems willing to acknowledge, as discourse.cpp is evidence of the evenly distributed reading and writing that took place between Herbelot and the computer/program itself.
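
Tzara’s recipe is, in effect, an algorithm, and writing it down as one makes the point about human involvement concrete. The toy Python below is my own gloss on the 1920 instructions, not anything from discourse.cpp; note how many authorial decisions survive the surrender to chance (what counts as a word, how the hat is shaken, when to stop):

```python
import random
import re

def dadaist_poem(article, seed=None):
    """Tzara's recipe, mechanized: cut an article into words,
    shake them in a 'hat,' and draw them out one by one."""
    words = re.findall(r"\S+", article)   # the scissors
    random.Random(seed).shuffle(words)    # the hat
    return " ".join(words)                # "an infinitely original author"
```

Fix a seed and the “chance” poem becomes perfectly reproducible, which is to say the human hand never really leaves the hat.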

Media Archaeology and Digital Stewardship

I was fortunate to have the chance to think through the relationship between the field of media archaeology, the Media Archaeology Lab, and digital preservation/stewardship thanks to this interview with Trevor Owens on the Library of Congress blog, The Signal, called “Media Archaeology and Digital Stewardship: An Interview with Lori Emerson.” The invitation to talk with Trevor was particularly fortuitous because Matthew Kirschenbaum had been here at CU Boulder the week before, discussing these very same issues in a faculty seminar he led called “Doing Media Archaeology.” You can read the interview here – I’d be interested in hearing comments you might have, especially about the possibility of a hardware/software resource sharing program.

mobile poetics: a select bibliography of digital textuality/art apps

For a while now I’ve been building a bibliography of digital textuality/art apps for the iPhone and iPad. The list below is far from complete, but I hope it is useful to those of you teaching students how to read and/or write digital textuality/art. Some entries link directly to the download page, while others link to pages with information on particular apps. Please let me know if there are other works you think I should add to this list.

latest addition to the Archeological Media Lab: original “First Screening” 5.25-inch floppies

I had the great fortune of meeting Lionel Kearns in Vancouver last spring and discussing bpNichol’s 1984 Apple IIe poem “First Screening.” (If you don’t know Kearns, he is a longtime Vancouver-based poet who was a student of Earle Birney and one of the four people who first rescued “First Screening.”) After I explained that, with the assistance of Jim Andrews, I had managed to obtain copies of “First Screening” for the Archeological Media Lab to run on its Apple IIes, Kearns immediately and generously offered to donate original working copies of the poems bp was working on when he visited Kearns in the early 1980s. I’m thrilled to report the floppies arrived last week, safe and sound, with this note from Kearns: “I am not sure of the actual date, but it was some time previous to the actual publication on disk of the collection of poems by Underwhich Editions.”