Excavating, Archiving, Making Media Inscriptions // In and Beyond the Media Archaeology Lab

“Excavating, Archiving, Making Media Inscriptions // In and Beyond the Media Archaeology Lab” appears in Inscription, published in Gothenburg, Sweden, by the Regional State Archives (2018): 247-272.

*

This autobiographical short essay provides a snapshot of how scholarship might engage with the materiality of seemingly immaterial media and thus, by extension, with the ways in which verbal/visual expressions both inscribe and are inscribed by these same media. I begin with an overview of the founding of the Media Archaeology Lab (MAL) back in 2009 and what it has evolved into since the lab moved into a new space in 2012 – a lab that tries to imagine and even reimagine what a lab could be, what it could do, in the arts/humanities. I then move on to a discussion of how the lab drives my personal research projects – from my book Reading Writing Interfaces to “Other Networks” – and thus, by extension, I hope to show the way any media archaeology lab could drive humanities-based research. The overarching argument of this piece is about the value of the interpenetration of excavating, archiving, and making material, mediated inscriptions as that which drives thinking.

The Media Archaeology Lab
The near-compulsion I have had for the past five years or so to understand the inner workings of obsolete computers has caught me by surprise, especially since I did not own a personal computer until the mid-1990s and it is only recently that I have dabbled in programming and started in earnest to understand how computers work. But in retrospect the move from poetics to media archaeology makes sense and even points to one way that literary studies could be rejuvenated by expanding its sense of itself as a discipline invested in close reading (and even distant reading) whatever is conventionally accepted as a literary text. That is, literary studies could also, not instead, read both media and media inscriptions not as literary texts but as artifacts worthy of the same attention and methodological approach.

My educational background is in experimental poetry and poetics and I picked this field in the late 1990s because of my interest in the materiality of poetic expression, whether sound poetry and the material presence of the body or concrete poetry and the material shape, size, and texture of individual letters created via letterpress, typewriter, or dry-transfer lettering. It was a logical move, then, into researching digital poetry as another form of experimental writing and thinking about the nature of materiality in digital poetry and electronic literature more broadly. Specifically, the work of Canadian poet bpNichol served as the crucial bridge from sound poetry, concrete poetry, and the broad range of intermedia works he produced throughout his life to digital poetry, as Nichol created “First Screening” in 1982-1983 – one of the earliest kinetic digital poems. Moreover, alongside a handful of colleagues who also work in or run labs (colleagues such as Dene Grigar, Matthew Kirschenbaum, and Darren Wershler) I soon started to see that the original platforms for these works from the 1980s and early to mid-1990s were essential to the works themselves for two reasons. For one, in the early to mid-2000s, shortly before I founded the MAL, anyone with an interest in early works of digital literature or art such as Nichol’s needed the original platform just to access the work (not to mention preserve it), especially in cases where no one had yet created an emulation. For another, I also started to see how original platforms are part and parcel of the works themselves, not to mention how access to these platforms gives us a deeper understanding of early digital works and how they were produced. In other words, the textual/visual elements of “First Screening” are inseparable from the underlying code (which includes a permutational code-poem not visible on screen) as well as the infrastructure of the Apple IIe computer it was written on and for, from the functioning of the machine’s floppy drive and its uniquely designed keyboard to the circuit boards and RAM chips accessible from the backside of the keyboard.

In 2009, my own solution to this need for preservation and access in digital art/literature was to create the MAL (originally named the Archaeological Media Lab) after receiving a small start-up grant from the University of Colorado Boulder. The lab was first housed in a 7′ x 14′ room in a 1940s house on campus and it housed only fifteen functioning Apple IIe computers – enough so that students in my classes as well as researchers could run the original version of “First Screening” on 5.25″ floppy disks.

I tried selling the lab to the larger public during these early years by arguing that it was an entity for supporting a locavore approach to sustaining digital literature, declaring that there’s no suitable online or virtual equivalent to coming to the lab and using the original machines and the original works themselves (a pitch I also hoped justified our very modest online presence). In 2012, several English department administrative assistants arranged for what felt like a miraculous space exchange that gave the lab an entire 1200 square foot basement in another older home on campus. Ever since, thanks to this new/old space and the freedom I was granted by my university to do whatever I wanted, the lab has exploded into an utterly open-ended space for just about any kind of experimentation.

Before I expand on the specifics of the lab, I would like to emphasize that I was fortunate during these early years in having no higher authority to report to and this, coupled with the lack of a hierarchical structure, is largely what made the lab into what it is now – a fluid space for creatively undertaking research or any kind of writerly/artistic practice and one that shifts and changes according to whoever participates in the lab from one year to the next. I would also like to emphasize that over the last couple of years I have come to see that having both artists and humanists/critics involved in the ongoing process of building a lab is an extremely effective way to intervene in the science-dominated culture of labs, which are all too often tightly controlled spaces closed to anyone (member of the public or member of the institution) not affiliated with the research group, procedurally rigid, and controlled from the top down. I have no interest in taking on “lab” as a way to emulate the sciences in this regard and I also have no interest in trying to legitimize work that takes place in the lab by giving it the veneer of scientific work. By contrast, the MAL is intended to be a porous, flexible, creative space for, again, hands-on doing/tinkering/playing/creating as an instigator for rigorous thinking in whatever register people would like. It is a place where we excavate popular analog and digital inscription devices along with unusual or rare counterparts, where we produce practice-based research projects as well as make available opportunities for artists to “make” via artist residencies, and where we simultaneously archive these devices and the art/literature created on or with these devices.

More specifically, as of October 2016, the MAL houses roughly 1500 still-functioning individual items, including analog media from the late nineteenth century through the twentieth, digital computer hardware and software, handheld devices, game consoles, and a substantial collection of manuals, journals, magazines, and books on early computing from the 1950s through the 2000s.

Just in terms of our collection of digital devices, we house thirty-five portables, seventy-three desktops, twenty-two handhelds, ten game consoles, and eight other computing devices. Given this substantial collection of media items that are meant to be turned on and actively used, the lab is not a museum but rather a place where denizens can “do” media archaeology in any number of ways. That is, denizens may undertake research projects or artist residencies that involve taking apart or excavating layers of certain devices to understand how they work, tracking the manufacturing history of their parts, or putting old and new devices in conversation with each other to see how the underlying structure of seemingly similar media produces entirely different literary or artistic products; they may also engage in hands-on teaching with high school students, undergraduates, and graduate students to demonstrate how past technologies provide a way to re-see (or defamiliarize) present technologies and even help to re-see future technologies; finally, denizens may also learn how to accession and catalog items while collaborating on creating metadata schemes to describe holdings, learning the ways in which description (especially description that is overly concerned with the outward appearance of an item rather than its functionality) may pre-determine and even over-determine our understanding of the item being described.
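To make that last point about description concrete, here is a minimal sketch in Python of two catalog records for the same machine – my own hypothetical illustration, not the MAL’s actual metadata scheme, and every field name and value is invented. The first record fixes the Apple IIe as a static artifact; the second foregrounds what the machine can still do, and so invites entirely different uses of the item.

```python
# Two hypothetical catalog records for the same machine -- invented for
# illustration, not the MAL's actual metadata scheme.

# An appearance-centered record: the machine as static, displayable artifact.
appearance_record = {
    "item": "Apple IIe",
    "year": 1983,
    "condition": "beige case, minor yellowing, all keycaps present",
}

# A functionality-centered record: the machine as still-working tool.
functionality_record = {
    "item": "Apple IIe",
    "year": 1983,
    "boots": True,
    "drives": ["Disk II 5.25-inch floppy (functioning)"],
    "runs": ["bpNichol, 'First Screening' (1982-1983)"],
    "notes": "keyboard, circuit boards, and RAM chips accessible for tinkering",
}
```

A researcher querying the first scheme can only ask what the item looks like; a researcher querying the second can ask what the item can still do – which is precisely the question the MAL is built around.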

In short, then, the MAL is unique for a number of reasons. Rather than being hierarchical and classificatory both in its display/organization of inscription devices as well as people, it’s porous, flat, and branching; devices are organized in any way participants want; everything is functional and made to be turned on and experimented with to see what difference it makes to use one device or piece of software over another to denaturalize present ubiquitous technologies. Rather than setting out to adhere to specific outcomes and five-year plans, we change from semester to semester and year to year depending on who’s spending time in the lab. Rather than being an entity you need to apply to be a part of or an entity you can only participate in as a student, researcher, faculty member, or librarian, for example, anyone may participate in the lab and have a say about what projects we take on and what kinds of work we do. Rather than being about the display of precious objects whereby you only ever get a sense of the external appearance or even external functionality of the objects, we encourage people to tinker, play, open things up, and disassemble. Rather than the perpetuation of neat, historical narratives about how things came to be, we encourage an experimental approach to time – we put Edison phonograph disks from 1912 beside contemporary proprietary software or we place the Vectrex gaming console and its lightpen from 1983 next to a contemporary tablet and stylus. And finally, rather than participating in the process of erasing the knowledge production process or perpetuating the illusion of a separation between those who work in the lab and the machines they work on and hiding the agency of the machines themselves as well as the agency of the larger infrastructure of the lab, we are interested in constantly situating anything and everything we do in the lab and being self-conscious, descriptive about the minute particularities of the production process for any projects we undertake.

Reading Writing Interfaces
While the MAL is defined by constant change and experimentation, I want to make it clear that I also use the lab to produce the epitome of traditional research in the humanities: the single-authored monograph. Reading Writing Interfaces (published by the University of Minnesota Press in 2014) represents my attempt to mesh together media archaeology via the lab, media studies, and literary studies by way of a reverse chronology which I use to move back in time to look at how personal computing could have been otherwise and still could be otherwise. I also use media archaeology and the reverse chronology not just to give lip service to how we are all against “narratives of technological progress” but as a way to actually uncover examples of earlier interfaces, earlier modes of personal computing that have capabilities our contemporary devices do not have. At the same time as I identify earlier, now-defunct interfaces, I look at how writers from the present moment back to the 1980s, 1970s, 1960s, and even back to the late nineteenth century registered the affordances of other and older writing technologies by working with and against interfaces (from iPad app to command line, typewriter, and pen/paper). In this context, I’ve been working more and more on developing specific theories of media poetics – ways in which writers are not just registering through writing the affordances of whatever writing technology is at hand, but also disrupting the ways in which corporations now work to make these affordances invisible and then celebrate that invisibility in terms of the wonders of seamlessness, intuitiveness, transparency, and user-friendliness.

Media poetics has become a powerful argument for the value of literature – not as something that expresses “who we are”, whoever that is, and not as something that necessarily tells stories about who we are (though it could, and sometimes does so powerfully and better than any other medium) but as something that registers media effects through inscriptions we recognize as linguistic. More than just everyday uses of media, whether they are made with pen and paper, typewriter, personal computer, or on a network, these works of media poetics are limit cases of the capabilities of specific media, expressions of machines themselves just as much as they are expressions of human authors. Media poetics therefore also opens up the possibility of reading literature less for what it says and more for how it says it and how it reads its own writing process. The latter also means replacing the practice of close reading with descriptions of media effects as an alternative mode of reading. It takes “reading” off the spectrum of close and distant reading and re-orients the object of what’s being read altogether.

However, I never would have come to the above realizations about media poetics in Reading Writing Interfaces without having the chance to tinker in the Media Archaeology Lab and have hands-on access to different competing interfaces from the 1970s through the 1980s. For example, if one spends just an hour or two in the lab, one cannot help but see how easy it is – and how necessary – to open up any one of the lab’s Apple II computers and actively intervene in the machine’s capabilities rather than have them determined in advance, as one experiences when one interacts with almost any Apple computer released after the Macintosh in 1984. In other words, to return once more to the example of bpNichol’s “First Screening,” the lab makes it perfectly clear that this piece is so much more than the text that moves across the screen (contrary to what one might believe if one only has access to the QuickTime movie emulation), and that it is so precisely because of the piece’s native platform.

Other Networks
“Other Networks,” the project I am currently working on, also comes out of the MAL but originally emerged from an innocent question my colleague Matthew Kirschenbaum asked me about the place of the 1990s in my book; after mulling it over for nearly a month, I realized that writers/artists in the 1990s were not so much working with/against the material constraints of hardware as they were working with/against networks – the newly inaugurated World Wide Web as well as thousands of other networks such as Bulletin Board Systems, slow-scan TV, and cable-based networks such as Canada’s NABU. The project also comes out of our experiments in the MAL to get a phone line set up and then try to surf the web using, first, the experimental browser WebStalker from 1997, then a version of Netscape from 1995, as well as Mosaic from 1993 – the first widely used graphical browser for accessing the WWW.

All of these interfaces offer completely different experiences of being online and utterly different kinds of access to information that nowadays is brought to us through the utterly naturalized interface of Google. Moreover, our inability to get many of these browsers working because of incompatible infrastructure (from the nature of our phone line to the capabilities of our campus servers) brought into relief the ways in which different networks and their differing interfaces not only shape the visual/verbal inscriptions visible at the level of the screen but also, more fundamentally, shape the way the underlying layers are inscribed on each other.

“Other Networks” is, then, a network archaeology that once again moves from the present back to the mid-1960s, covering the odd history of telecommunications networks that pre-date the Internet and/or that exist outside of the Internet – networks that were imagined, planned, and created right alongside the tumultuous history of user-friendly interfaces and personal computing I touch on above. The point of the project is to imagine how the Internet, and networks in general, once were otherwise, could have been otherwise, and still could be otherwise. At the same time, I will once again be looking at the art and literature created on these networks as they are unmatched expressions of the limits and possibilities of the networks themselves – registering networked media noise. What follows is a brief overview of three networks, some of the hardware of which is currently in the MAL.

3.1 OCCUPY.HERE
Since the project moves from present to past, one of the first networks I discuss is called OCCUPY.HERE, created in 2012 in parallel with the Occupy Movement. OCCUPY.HERE claims to exist entirely outside of the Internet and describes itself as “inherently resistant to surveillance.” It consists of a wifi router near Zuccotti Park in New York City; anyone with a smartphone or laptop within range can access it through a portal website that opens onto what the creators describe as a Bulletin Board System-style message board on which users can share messages and files.

OCCUPY.HERE is, then, an example of a darknet – a network that uses non-standard protocols, anonymizes its users, and creates connections only between trusted users. Not surprisingly, darknets have been viewed with suspicion since the 1970s, when the U.S. military’s Advanced Research Projects Agency (ARPA) coined the term “darknet” to refer to networks unavailable via ARPANET. A new wave of concern about darknets came about in 2002 when a group of Microsoft researchers published a paper titled “The Darknet and the Future of Content Protection” (Biddle et al), arguing that the darknet was the greatest stumbling block to the control of digital content and devices that have already been sold to consumers. Given this history, it is not surprising that the Occupy Movement was so attracted to darknets as a means to utterly circumvent not just surveillance, but the entire economic underpinning of the Internet.

3.2 The Thing
As readers work their way through descriptions of these other networks, they will undoubtedly start to notice the way the reverse chronology turns up a distinctly nonlinear or perhaps recursive history of networks, where networks emerge, disappear, and reappear slightly recalibrated. I pointed out above that OCCUPY.HERE calls itself a bulletin board system (BBS), which is a kind of network that emerged in the late 1970s. While nearly all histories of the internet agree BBSes generally died out with the introduction of the World Wide Web in the early 1990s, OCCUPY.HERE is a small but effective disruption to this narrative.

Surprisingly, while there are plenty of self-published first-person accounts, and plenty of enthusiasts who reminisce online about their years running or participating in BBSes, no media studies-oriented book-length account has yet been written of what was probably the most important telecommunications network of the 1980s and 1990s. To give some historical context: the first BBS, called CBBS or Computerized Bulletin Board System, was created in 1978 and was originally conceived as a computerized version of an analog bulletin board for exchanging information. Thereafter, each BBS had a dedicated phone number, which generally meant that only one person could dial in at a time; also, most BBSes were communities of local users because of how prohibitively expensive it was to make long-distance phone calls; these local users could share files, read news, exchange messages publicly or privately, play games, and even create art. ANSI and ASCII art, for example, were popular art forms on BBSes.

One BBS I am particularly interested in is The Thing – a BBS that New York artist Wolfgang Staehle started in 1991, just one month after Tim Berners-Lee launched the World Wide Web. It was an online community center for artists and writers, a virtual exhibition space, and later a node in a network of international The Thing BBSes. But what particularly fascinates me about The Thing is the way in which the network itself – rather than any individual pieces of content uploaded to it – was conceived as an artwork (some of the network’s hardware is now in the MAL). As Staehle himself has put it: “The whole meaning of it would come out of the relationships between the people and not the modernist ideal of the single hero artist that the market loves…” (quoted in Kopstein)

3.3 Artex
Another important premise of the “Other Networks” project is to work against the widely accepted yet inaccurate, U.S.-centric story of the “invention” of the Internet. Even well into the 1990s, there were a great number of thriving networks all around the world that existed outside of the Internet, and many succeeded because of government policy. For example, one network supported by the Canada Council was ARTEX, originally called ARTBOX. Founded in 1980 and lasting until 1991, ARTEX, or the ELECTRONIC ART EXCHANGE PROGRAM, was a simple and cheap electronic mail program designed to be used by artists and writers interested in what they called “alternative uses of advanced technology.” The program and the network were provided by the I.P. Sharp Associates timesharing network, a company based first in Toronto and then expanding its reach with offices (and thereby network nodes) all around the world.

An example of a writerly use of the network is Norman White’s “Hearsay,” which dates from November 1985 and was a tribute to the Canadian poet Robert Zend, who had died a few months earlier and was known for, among other things, creating the astonishingly beautiful collection of typescapes, or typewriter landscapes, called ARBORMUNDI. “Hearsay” builds on the following text, which Zend published in From Zero to One (1973):

THE MESSENGER ARRIVED OUT OF BREATH. THE DANCERS STOPPED THEIR PIROUETTES, THE TORCHES LIGHTING UP THE PALACE WALLS FLICKERED FOR A MOMENT, THE HUBBUB AT THE BANQUET TABLE DIED DOWN, A ROASTED PIG’S KNUCKLE FROZE IN MID-AIR IN A NOBLEMAN’S FINGERS, A GENERAL BEHIND THE PILLAR STOPPED FINGERING THE BOSOM OF THE MAID OF HONOUR. “WELL, WHAT IS IT, MAN?” ASKED THE KING, RISING REGALLY FROM HIS CHAIR. “WHERE DID YOU COME FROM? WHO SENT YOU? WHAT IS THE NEWS?” THEN AFTER A MOMENT, “ARE YOU WAITING FOR A REPLY? SPEAK UP MAN!” STILL SHORT OF BREATH, THE MESSENGER PULLED HIMSELF TOGETHER. HE LOOKED THE KING IN THE EYE AND GASPED: “YOUR MAJESTY, I AM NOT WAITING FOR A REPLY BECAUSE THERE IS NO MESSAGE BECAUSE NO ONE SENT ME. I JUST LIKE RUNNING.”

White’s “Hearsay,” on the other hand, was an event based on the children’s game of “telephone,” whereby a message is whispered from person to person and arrives back at its origin, usually hilariously garbled. Zend’s text was sent around the world in 24 hours, roughly following the sun, via the I.P. Sharp Associates network. Each of the eight participating centers was charged with translating the message into a different language before sending it on. The final version, translated back into English, arrived in Toronto as a fascinating example of a literary experiment with semantic and media noise:

THE DANCERS HAVE BEEN ORDERED TO DANCE, AND BURNING TORCHES WERE PLACED ON THE WALLS.

THE NOISY PARTY BECAME QUIET.

A ROASTING PIG TURNED OVER ON AN OPEN FLAME.

THE KING SAT CALMLY ON HIS FESTIVE CHAIR, HIS HAND ON A WOMAN’S BREAST.

IT APPEARED THAT HE WAS SITTING THROUGH A MARRIAGE CEREMONY.

THE KING ROSE FROM HIS SEAT AND ASKED THE MESSENGER WHAT IS TAKING PLACE AND WHY IS HE THERE? AND HE WANTED AN ANSWER.

THE MESSENGER, STILL PANTING, LOOKED AT THE KING AND REPLIED: YOUR MAJESTY, THERE IS NO NEED FOR AN ANSWER. AFTER ALL, NOTHING HAS HAPPENED. NO ONE SENT ME. I RISE ABOVE EVERYTHING.

*

What I have just provided, then, is a brief overview of three “other networks.” Again, I am attempting to incorporate ideas about materiality, excavation, and the importance of hands-on work from media archaeology to undertake a network archaeology of the many different and even conflicting networks that existed before the Internet consolidated them all under one protocol (TCP/IP). The larger point of this work, however, is to show that the history of how we arrived at the Internet, and of how it came to be, is much more muddied, contradictory, and strange than has yet been accounted for.

To circle back to the beginning of my essay, I would like to note once more that the MAL and its underlying philosophy are really what drive this project, just as they drove Reading Writing Interfaces. The project continues to develop ideas – originating in the lab and discussed in my book – about ruptures, about interfering in narratives of technological progress, about the tight connection between the materiality of a machine and what’s created on or with that machine – all ideas that come into focus when one has hands-on access to the original media.

Works Cited

Biddle, Peter, Paul England, Marcus Peinado, and Bryan Willman. “The Darknet and the Future of Content Protection.” In Digital Rights Management, eds. Eberhard Becker, Willms Buhse, Dirk Günnewig, and Niels Rump. Heidelberg: Springer Berlin Heidelberg, 2003.

Emerson, Lori. Reading Writing Interfaces: From the Digital to the Bookbound. Minneapolis: University of Minnesota Press, 2014.

Kopstein, Joshua. “‘The Thing’ Redialed: How a BBS Changed the Art World and Came Back from the Dead.” The Verge (15 March 2013). <http://www.theverge.com/2013/3/15/4104494/the-thing-reloaded-bringing-bbs-networks-back-from-the-dead>.

Nichol, bp. “First Screening.” <http://vispo.com/bp/index.htm>.

Media Archaeology Lab. <http://mediaarchaeologylab.com>.

Occupy.Here. <https://occupyhere.org/>.

White, Norman. “Hearsay.” <http://alien.mur.at/rax/ARTEX/hearsay.html>.

Zend, Robert. ARBORMUNDI. Vancouver: Blewointment Press, 1982.

—“The Message.” From Zero to One. Mission, Canada: Sono Nis Press, 1973, p. 61.

 


The Media Archaeology Lab as Platform for Undoing and Reimagining Media History

“The Media Archaeology Lab as Platform for Undoing and Reimagining Media History” appears in Hands on Media History: A New Methodology in the Humanities and Social Sciences (Routledge 2019), edited by Nick Hall and John Ellis.

*

Introduction
It is hard not to notice the rapid proliferation of labs in the arts and humanities over the last ten years or so – labs that now number in the thousands in North America alone and that range from physical spaces for hands-on learning and research to nothing more than a name for an idea or a group of people with similar research interests, or perhaps a group of people who share only a reading list and have no need for physical space and no interest in taking on infrastructural thinking through shared physical space. Regardless of their administrative organization, focus, funding, equipment or outputs (or lack thereof), the proliferation of these labs reflects a sea-change in how the humanities are trying to move away from the 19th-century model of academic work typified by the single scholar who works within the boundaries of a self-contained office and within the confines of their discipline to produce a single-authored book that promotes a clearly defined set of ideas.

Instead, humanities scholars seem to be rallying around the term “lab” (along with “innovation” and “interdisciplinary” and “collaborative” – terms that are all invoked whenever the topic of labs comes up) likely because this particular term and structure helps scholars put into better focus their desires for a mode of knowledge production appropriate to the 21st century – what one might call “posthumanities” after Rosi Braidotti’s articulation of it in The Posthuman as a humanities practice focused on human-non-human relationships, “heteronomy and multi-faceted relationality,” and one that also openly admits, in Braidotti’s words once more, that “things are never clear-cut when it comes to developing a consistent posthuman stance, and linear thinking may not be the best way to go about it.” For me, in more concrete terms, this version of posthumanities work means pursuing modes of knowledge production that are quick on their feet, responsive, conversational or dialogical, emergent, collaborative, transparent, and self-conscious; modes that are interested in recording their own knowledge production processes and experimental about what constitutes a rigorous knowledge production and distribution process. These are perhaps by now tired clichés of the kind of work many would like to do, many believe they do, and that many administrators would like to see humanists do; but it is still worth noting that – more because of a longstanding lack of access to both material and immaterial resources than a lack of imagination – very few are actually able to do this kind of work. This trend to create labs, even if only in name, is also a response to pressures humanists are feeling to both legitimize and even “pre-legitimize” what they do as increasingly they are expected not just to “perform” but, more importantly, to prove they’re performing. The proof of performance is possibly now more important than the performance itself. And where else do we get our ideas about “proof” but from some notion of how the sciences are in the business of proving the rightness or wrongness of theories about reality by way of the “discovery” of facts that takes place in a laboratory environment?

As popular figures in Science and Technology Studies such as Bruno Latour (particularly in his classic Laboratory Life from 1979, co-written with Steve Woolgar) and Donna Haraway (in her essay “Situated Knowledges: The Science Question in Feminism and the Privilege of Partial Perspective”) have been teaching us for several decades: these notions about proof and the scientific method do not need to have any grounding in how scientific truth is actually produced or manufactured – it is more about trying to figure out why the continual circulation of a particular cultural belief is necessary. I have come to see that the staying power of this belief about the nature of proof and scientific practice is derived not so much from scholars’ obliviousness or ignorance about these convention-bound processes of legitimation but instead from the importance of maintaining belief in humanism, even though it appears we are just talking about science. A belief about how scientists “discover” truth depends on the related belief that scientists are not affected by the agency of their tools, machines, the outside world, other people (Latour, 1986). This is a belief that is a cornerstone of humanism and thus it is just as much a part of the humanities as it is a part of the sciences, for the prevailing belief in the humanities seems to be that humanists are also not affected by their tools, machines, the outside world, other people. Microsoft Word is simply a tool I use to produce articles and books. Google is simply a search engine I use to discover relevant information. The Graphical User Interface just happens to be the easiest way for me to interact with my computer. Regardless of the constant admonition from administrators to innovate, collaborate, incubate and whatever other entrepreneurial terminology you can think of, at the end of the day our raises, appointments, ability to get jobs, and much else besides, depends on continually manufacturing the illusion of a clear separation between ourselves, others, and the rest of the material world.

It is true that some humanities labs appropriate a traditional notion of labs from the sciences as a way to continue humanism but they do so under the auspices of innovation – the Stanford Literary Lab, when it was under the directorship of Franco Moretti, is the most well-known example of this as Moretti described the lab’s main project of “distant reading” as one driven by the desire for “a more rational literary history” because “[q]uantitative research provides a type of data which is ideally independent of interpretations” (Moretti, 2003). But, these instances aside, what does a uniquely humanities lab look like – or what could such a lab look like if it did not feel compelled to respond to the aforementioned pressures to perform and “objectively” measure such performance? How could such a lab even creatively make the most of its more limited access to the kinds of resources large science labs depend on and instead embrace what I called above the posthumanities?

The Lab Book: Situated Practices in Media Studies (forthcoming from the University of Minnesota Press, co-written with Jussi Parikka and Darren Wershler) investigates the history as well as the contemporary landscape of humanities-based media labs – including, of course, labs that openly identify as being engaged, in terms of situated practices, with the digital humanities. Part of the book’s documentation of the explosion of labs or lab-like entities around the world over the last decade or so includes a body of over sixty interviews with lab directors and denizens. The interviews not only reveal profound variability in these labs’ driving philosophies, funding structures, infrastructures, administration, and outputs; they also clearly demonstrate how many of these labs do not explicitly either embody or refute scientificity so much as they pursue 21st-century humanities objectives (which could include anything from research into processes of subjectivation, agency, and materiality in computational culture to the production of narratives, performances, games, and/or music) in a mode that openly both acknowledges and carefully situates research processes as well as research products, the role of collaboration, and the influence of physical and virtual infrastructure. While, outside of higher education, “lab” can now refer to anything from a line of men’s grooming products to a department store display or even a company dedicated to psychometric tracking, across the arts and humanities “lab” still has tremendous, untapped potential to capture a remarkable array of methodically delineated and self-consciously documented entities for experimentation and collaboration that may or may not include an attention to history – though they almost always include an emphasis on “doing” or hands-on work of some kind.

I also view The Lab Book as an opportunity to position the Media Archaeology Lab (MAL) in the contemporary landscape of these aforementioned humanities/media labs. Since 2009, when I founded the MAL, the lab has become known as one that undoes many assumptions about what labs should be or do. Unlike labs that are structured hierarchically and driven by a single person with a single vision, the MAL takes many shapes: it is an archive for original works of early digital art/literature along with their original platforms; it is an apparatus through which we come to understand a complex history of media and the consequences of that history; it is a site for artistic interventions, experiments, and projects; it is a flexible, fluid space for students and faculty from a range of disciplines to undertake practice-based research; it is a means by which graduate students come for hands-on training in fields ranging from digital humanities, literary studies, media studies and curatorial studies to community outreach and education. In other words, the MAL is an intervention in “labness” insofar as it is a place where, depending on your approach, you will find opportunities for research and teaching in myriad configurations as well as a host of other, less clearly defined activities made possible by a collection that is both object and tool. My hope is that the MAL can stand as a unique humanities lab that is not interested in scientificity but that is instead interested in experiments with temporality, with a see-saw and even disruptive relationship between past, present and future, and in experiments with lab infrastructure in general.

From Archaeological Media Lab to Media Archaeology Lab
The MAL is now a place for hands-on, experimental teaching, research, artistic practice, and training using one of the largest collections in North America of still-functioning media spanning roughly a 130-year period – from a camera from 1880, a collection of early 20th century magic lanterns and an Edison diamond disc phonograph player to hardware, software and game consoles from the mid-1970s through the early 2000s. However, the MAL initially came to life in 2008-2009 as the Archaeological Media Lab. At that time, the field of media archaeology had not yet become well known in North America and the lab was nothing more than a small room on the campus of the University of Colorado at Boulder containing fifteen Apple IIe computers, floppy drives, and copies on 5.25″ floppy disks of a work I had come to admire very much: First Screening, one of the first (if not the first) digital kinetic poems, created by the Canadian experimental poet bpNichol.

I began the lab partly because I wanted to start experimenting with stockpiling hardware and software as a complementary preservationist strategy to creating emulations such as the one of First Screening that had recently been made available. Without being aware of the very nascent debates in archivist communities that were then pitting emulation against original hardware/software, I wanted to augment students’ and scholars’ access to early works of digital literature and art while also collecting other works and their original platforms in order to eventually make available emulations of these works.

However, I also created the lab because I wanted to bring in small undergraduate and graduate classes to work directly on the machines, with the original work by bpNichol, rather than only study the emulated version. In other words, the lab allowed me to think through with my students the difference the original material, tactile environment makes to our understanding of First Screening.

It was a straightforward enough experiment, but even now in 2017, the implications of this kind of literary/historical work are far-reaching and unsettling to the discipline. This kind of work first involves turning away from close reading and from studying literary products (as surface effects), to studying instead the literary production process – looking at how a literary work was made and how the author pushed up against the limits and possibilities of particular writing media. From there, the ramifications of such an approach start to become more obvious as soon as one realizes that learning and teaching “the how” of literary production cannot take place without access to the tools themselves in a hands-on lab environment. That said, while using hands-on work not just as an added feature but as the driving force behind teaching and research is quite new to the humanities, the production-oriented approach to interpreting literature has been around in one form or another since the early twentieth century. As many are fond of pointing out, nearly all foundational media studies scholars (from Walter Benjamin to Marshall McLuhan and Friedrich Kittler) were first literary scholars; moreover, one can read the long history of experimental writers, especially poets, as one that is inherently about experimenting with writing media – whether pens, pencils, paper or typewriters and personal computers.

Since my academic background is in twentieth century experimental poetry and poetics, the move to exploring the materiality of early digital poetry was a logical next step. Furthermore, once my attention turned to the intertwinement of First Screening with the Apple IIe, it likewise made sense to add to the lab’s collection other, comparable personal computers from the early 1980s such as the Commodore 64 – at least partly to get a sense of why bpNichol might have chosen to spend $1395 on the IIe rather than $595 on the C64. (The answer likely lies in the fact that the IIe was one of the first affordable computers to include uppercase and lowercase along with an 80-column screen, rather than the C64’s 40-column display for uppercase letters only.)

In these early years, I tried to sell the lab to the larger public by saying that it was an entity for supporting a locavore approach to sustaining digital literature – a pitch I also hoped justified our very modest online presence while also underscoring the necessity of working directly with the machines in the lab rather than accessing, say, an Apple IIe or Commodore 64 emulator online. Thus, from 2009 until 2012, the “Archaeological Media Lab” maintained its modest collection of early digital literature and hardware/software from the early 80s and gradually increased its network of supporters – from eBay sellers who had become ardent supporters of the lab, to students and faculty from disciplines such as Computer Science, Art, Film Studies, and English literature, to digital archivists. However, 2012 was a turning point for the lab for a number of reasons: first, and most importantly, the lab was given a 1000 square foot space in the basement of an older home on the edge of campus, making it possible for the lab to become the open-ended, experimental space it is today, with one of the largest collections of still-functioning media in North America; second, I renamed the lab the “Media Archaeology Lab” to better align it with the field of media archaeology I was then immersed in; and third, the MAL became a community enterprise no longer synonymous just with me – the lab now has an international advisory board of scholars, archivists, and entrepreneurs which I consult every six months, faculty fellows from CU Boulder, a regularly rotating cohort of undergraduate interns, graduate research assistants, post-graduate affiliates, and volunteers from the general public.

The lab, called the Media Archaeology Lab since 2012, is also now a kind of anti-museum museum in that all of its hundreds of devices, analog and digital, are meant to be turned on and actively played with, opened up, tinkered with, experimented with, created with, and moved around and juxtaposed next to any other device. Again, everything that is on display is functional, though we also have a decent stockpile of spare parts and extra devices. The MAL is particularly strong in its collection of personal computers and gaming devices from the 1970s through the 1990s, ranging from the Altair 8800b (1976), the complete line of Apple desktop computers from an Apple I replica (1976/2012) to models from the early 2000s, desktops from Sweden (1981) and East Germany (1986), and a Canon Cat computer (1987 – I discuss this machine in detail in the following section), to game consoles such as the Magnavox Odyssey (1972), Video Sports (1977), Intellivision (1979), Atari 2600 (1982), Vectrex (1982), NES (1983) and other Nintendo devices. These are just a handful of examples of the hundreds of machines in the MAL collection, in addition to thousands of pieces of software, magazines, books and manuals on computing from the 1950s to the present as well as the aforementioned analog media we house from the nineteenth and twentieth centuries.

A Case Study in Undoing and Reimagining Computer History: the Canon Cat
While I am attempting to illustrate the remarkable scope of the MAL’s collection, I am also trying to show how anomalies in the collection quietly reveal that media history, especially the history of computing, is anything but a neat progression of devices simply improving upon and building upon what came before; instead, we can understand the waxing and waning of devices more in terms of a phylogenetic tree whereby devices change over time, split into separate branches, hybridize, or are terminated. Importantly, none of these actions (altering, splitting, hybridizing, or terminating) implies a process of technological improvement and thus, rather than stand as a paean to a notion of linear technological history and progress, the MAL acts as a platform for undoing and then reimagining what media history is or could be by way of these anomalies.

The Canon Cat is one of the best examples I’ve come up with of a machine that disrupts any attempt to narrativize a linear arc of past/present/future that supports notions of progress or even notions of regression. This machine was designed by Jef Raskin after he left Apple in the early 1980s and it was introduced to the public by Canon in 1987 for $1495 – roughly $3316 in 2017, the year of this writing. Although the Cat was discontinued after only six months, around 20,000 units were sold during this time. The Canon Cat is a particularly unusual device as it was neither behind the times nor ahead of its time – it was actually very much of its time, albeit a time that does not fit into our usual narrative of the history of personal computing.

First, this machine was marketed as an “Advanced Work Processor.” Although it looks like a word processor, the Cat was meant to be a step beyond both the IBM Selectric typewriter and conventional word processors. It came with standard office suite programs, a built-in communications device, a 90,000-word dictionary, and the ability to program in Forth and assembly language. While the Cat was explicitly not a word processor, it was also not supposed to be called a “personal computer” because its interface was distinctly different from both the command-line interface and the Graphical User Interface (GUI) that, by 1987, had already become inseparable from the idea of a personal computer. Try to imagine a computer that had no concept of files and no concept of menus. Instead, all data was seen as one long “stream” of text broken into several pages. And so even though the interface was text-based (it makes no use of a mouse, icons, or graphics), its functions were built right into the keyboard. Whereas with a machine that uses a GUI you might use the mouse to navigate to a menu and select the command “FIND”, with the Cat you use the “LEAP” keys.

But before I can explain how LEAP works, I need to explain the remarkable way Raskin designed the cursor, because the cursor is part and parcel of the LEAP function. The Cat’s cursor has several states: narrow, wide, and extended. In addition to the variable cursor states, the cursor blink rate also indicates the state of the text. The cursor blink rate has two states: clean (whereby the cursor blinks at a rate of roughly 3 Hz to indicate that all changes to the text have been saved to a disk) and dirty (whereby the cursor blinks at a rate of about 1 Hz to indicate that changes have been made to the text and they have not been saved to a disk). Leaping, then, is the Cat’s method of cursor movement; you can leap forward and backward using the LEAP FORWARD and LEAP BACKWARD keys. While the LEAP FORWARD key is held, a pattern may be typed. While the pattern is being typed, the cursor immediately moves forward and lands on the first character of the first occurrence of the pattern in the text. LEAP BACKWARD behaves the same as LEAP FORWARD except that the cursor moves in the opposite direction through the text. Note that LEAP was, at that time, roughly fifty times faster than the same function on the Apple Macintosh and possibly just as fast as “FIND” is on our contemporary machines.
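To make the mechanics of leaping concrete, here is a minimal sketch in Python – my own illustration of the incremental-search behavior described above, not Raskin’s actual code (the Cat was programmed in Forth and assembly), and the names LeapCursor, press_leap, and type_char are invented for the example. Each character typed while a LEAP key is held immediately moves the cursor to the nearest occurrence of the growing pattern:

```python
# A sketch of LEAP-style incremental search, assuming the behavior described
# above; an illustration only, not the Cat's actual implementation.

class LeapCursor:
    def __init__(self, text: str, position: int = 0):
        self.text = text
        self.position = position  # index of the character the cursor sits on
        self.anchor = position
        self.pattern = ""

    def press_leap(self) -> None:
        """Holding down a LEAP key anchors the search at the current cursor."""
        self.anchor = self.position
        self.pattern = ""

    def type_char(self, ch: str, forward: bool = True) -> int:
        """Each character typed while LEAP is held extends the pattern and
        immediately jumps the cursor to the nearest match of the whole pattern."""
        self.pattern += ch
        if forward:
            hit = self.text.find(self.pattern, self.anchor + 1)
        else:
            hit = self.text.rfind(self.pattern, 0, self.anchor)
        if hit != -1:  # a failed leap simply leaves the cursor where it is
            self.position = hit
        return self.position

# Holding LEAP FORWARD and typing "t", "h", "e" in turn:
cursor = LeapCursor("leap is the cat's method of cursor movement")
cursor.press_leap()
for ch in "the":
    print(repr(cursor.pattern + ch), "->", cursor.type_char(ch))
# 't' -> 8, 'th' -> 8, 'the' -> 8: one jump per keystroke, with the cursor
# landing on the "t" of "the" as soon as the first character is typed.
```

The design choice worth noticing is that the search happens on every keystroke rather than after a completed query, which is part of why leaping felt so much faster than a conventional FIND dialog.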

I have only discussed two features of the Cat – the cursor functionality and LEAP, both of which make it possible to do many more things than we can do today with FIND or control-F or with our generally single-purpose cursor. My point is that, just on the face of it, the Canon Cat disrupts even the most nuanced genealogical accounts of computers and digital devices. Where does a Work Processor fit in the history of computing – a history that nearly always glides seamlessly from the IBM Selectric, to kit computers, minicomputers, microcomputers, word processors and personal computers? Moreover, this disruption only becomes evident when you look not at the Cat’s outward appearance, its style and design, but at its functionality.

It is also important to note the bundle of contradictions and inaccuracies the Cat’s functionality brings to light, as they show us the mismatch between what we believe is the history of computing and the disruptions to this story represented by machines such as the Cat. For example, while, beginning with the Macintosh, Apple may have had an uncanny knack for weaving design into marketing, that certainly wasn’t the case across the board. The design and marketing of computers in the 1980s were not necessarily one and the same, as Raskin’s vision for the machine was consistently contradicted by Canon. For example, Canon sold the Cat as a secretarial workstation and therefore represented it in promotional materials as a closed system, when in fact the Cat was designed not only to integrate with third-party software but also had a connector and software hooks for a pointing device that could be added on later. Moreover, despite Canon’s efforts to market the machine as closed, somehow Raskin was able to make sure the Cat came with a repair manual and very detailed schematics for how to disassemble and reassemble every single part of the machine. The Apple Macintosh, by contrast, never came with anything like schematics; in fact, Apple openly discouraged people from opening up the Macintosh and repairing it themselves, in the same way that our Apple devices nowadays are similarly hermetically sealed (Emerson, 2014). Furthermore, while the Cat was consistently marketed by Canon in terms of its speed and efficiency, reinforcing our belief that these are the two markers of progress when it comes to digital technology, Raskin himself seemed to take pride in making heretical statements about how his designs were based on an “implementation philosophy which demanded generality and human usability over execution speed and efficiency” (quoted in Feinstein, 2006). By contrast, every single bit of Canon’s promotional material for the Cat – from videos to magazine ads to the manuals themselves – emphasized the machine’s incredible speed.

A Variantology of Hands-On Practices
The MAL, then, is essential for exploring the functionality of historically important media objects – functionality that cannot be understood in any depth if one only has access to promotional material or archival documents and that fundamentally shapes one’s understanding of the media object’s place in the history of technology. Otherwise put, the lab invites one to reread media history in terms of non-linear and non-teleological series of media phenomena – or ruptures – as a way to avoid reinstating a model of media history that tends toward narratives of progress and generally ignores neglected, failed, or dead media.

I have also come to understand the MAL as a sort of “variantological” space in its own right, a place where, depending on your approach, you will find opportunities for research and teaching in myriad configurations as well as a host of other, less clearly defined activities made possible by a collection of functioning items that are both object and tool. In other words, the lab is both an archive of hardware and software that are themselves objects of research at the same time as the hardware and software generate new research and teaching opportunities.

For example, in terms of the latter, in the last three years the lab’s vitality has grown substantially because of the role of three PhD students who are developing their own unique career trajectories in and through the lab. The results have already been extraordinary. One student, who wishes to obtain an academic position after graduation, has created a hands-on archive of scanners in conjunction with a dissertation chapter, soon to be published as an article, on the connections between the technical affordances of scanners and online digital archives. Another student, who wishes to obtain a curatorship after graduation, founded an event series called MALfunctions, which pairs nationally and internationally recognized artists with critics on topics related to the MAL collection; this student also arranges residencies at the lab for these visiting artists/critics who, in turn, generate technical reports on their time spent in the MAL; furthermore, as a result of her work with this event series, this student has been invited to curate annual media arts festivals and local museums and galleries. Yet another student, who wishes to pursue a career in alternative modes of teaching and learning, has started a monthly retro games night targeted specifically at members of the LGBTQ community at CU; she is also running monthly workshops teaching students and members of the public how to fix vintage computers and game consoles as well as the basics of surveillance and privacy; as a result of this work, she was invited to run a workshop at the Red Hat Summit in Boston, MA in Spring 2017.

In sum, the MAL is unique for a number of reasons. Rather than being hierarchical and classificatory both in its display of objects as well as its administrative organization of people, the MAL is porous, flat, and branching; objects are organized in any way participants want; everything is functional and made to be turned on. Rather than setting out to adhere to specific outcomes and five-year plans, we change from semester to semester and year to year depending on who’s spending time in the lab. Rather than being an entity you need to apply to be a part of or something you can only participate in as a researcher, librarian, or PhD student, anyone may participate in the lab and have a say about what projects we take on, what kinds of work we do. Rather than being about the display of precious objects whereby you only ever get a sense of the external appearance or even external functionality of the objects, we encourage people to tinker, play, open things up, disassemble. Rather than the perpetuation of neat, historical narratives about how things came to be, we encourage an experimental approach to time – put Edison disks beside contemporary proprietary software or put the Vectrex and its lightpen up next to a contemporary tablet and stylus to see what we can learn through the juxtapositions. And finally, rather than participating in the process of erasing the knowledge production process or perpetuating the illusion of a separation between those who work in the lab and the machines they work on and hiding the agency of the machines themselves as well as the agency of the larger infrastructure of the lab, we are interested in constantly situating anything and everything we do in the lab and being self-conscious, descriptive about the minute particularities of the production process for any projects we undertake.

In short, it’s my hope that the MAL can be a tool for moving away from humanism and traditional humanities work and instead tentatively, provisionally model what posthumanities work might look like.

Bibliography
Braidotti, Rosi. (2013). The Posthuman. Oxford, UK: Polity, 143.

Emerson, Lori. (2014). Reading Writing Interfaces: From the Digital to the Bookbound. Minneapolis: University of Minnesota Press, 47-85.

Feinstein, Jonathan S. (2006). The Nature of Creative Development. Stanford: Stanford University Press, 148.

Haraway, Donna. (1988). “Situated Knowledges: The Science Question in Feminism and the Privilege of Partial Perspective.” Feminist Studies, 14:3 (Autumn), 575-599.

Latour, Bruno and Steve Woolgar. (1986). Laboratory Life: The Construction of Scientific Facts. 2nd edition. Princeton, NJ: Princeton University Press, 240.

Moretti, Franco. (2003). “Graphs, Maps, Trees: Abstract Models for Literary History.” New Left Review 24 (November-December), 67-93.

 

now out: “Anarchive as technique in the Media Archaeology Lab | building a One Laptop Per Child mesh network”

I am thrilled to have had the opportunity to co-author a paper, titled “Anarchive as technique in the Media Archaeology Lab | building a One Laptop Per Child mesh network,” with libi striegl, a remarkable PhD student I work with. It is now out, open access, in the inaugural issue of The International Journal of Digital Humanities (edited by Thorsten Ries), and it discusses how the Media Archaeology Lab acts as both an archive and a site for what we describe as ‘anarchival’ practice-based research and research creation. ‘Anarchival’ indicates research and creative activity enacted as a complement to an existing, stable archive. In researching the One Laptop Per Child initiative, by way of a donation of XO laptops, the MAL has devised a modular process which could be used by other research groups to investigate the gap between the intended use and the affordances of any given piece of technology. Please read and enjoy!

What and Where is the Interface in Virtual Reality? An Interview with Illya Szilak on Queerskins

It’s been four years since the publication of Reading Writing Interfaces (University of Minnesota Press 2014) and admittedly, to my ears and eyes, the first chapter on gestural and multitouch interfaces already seems outdated – at least in terms of specific tech if not in terms of general principles. If I were going to write about interfaces that are dominating or seeking to dominate the consumer and creative markets in 2018, instead of gestural or multitouch devices I would surely have to write about Voice User Interfaces (VUIs) such as Amazon Echo, Google Home, or Apple HomePod – those seemingly interface-less, inert boxes of computing power (for some reason, always coded as women – harkening back to a mid-20th century notion of the female secretary as self-effacing, perpetually amiable, always helpful, always eager to please) embedded throughout homes, sitting there waiting to respond to any voice command and appearing less as computers and more as artificially intelligent in-house butlers.

But I would also have to write about the growing inevitability of Virtual Reality systems such as Oculus Rift, Sony PlayStation VR, HTC Vive, Google Daydream View, Samsung Gear VR, and I’m sure many more. For me, the interface of VR is more difficult to account for than that of VUI devices, partly because VR poses two problems. For one, the interface that dominates most of the waking hours of the creator is entirely different from that of the user (god help the VR designer who has to design a VR environment in Unity with a headset on). The second problem is that, on the user side, it’s as if the interface has been displaced, moved to the side, outside of your vision and your touch, at the same time as you’re now meant to believe you’re inside the interface – as if your head is not in front of the screen but rather inside it. There is still a screen (even though it’s now inside your headset) and there is still a keyboard and mouse (you have to have a PC to use a device such as Oculus Rift, so both the keyboard and mouse are presumably somewhere in the room with you as you play), but all these key components of the Keyboard-Screen-Mouse interface have been physically separated and then supplemented with hand-controllers. The way in which the interface for VR users has been physically removed to the peripheries of the room in which the user is stationed is, I think, quite significant. If an interface is the threshold between human and computer, any change in either side of the threshold or in the threshold itself (whether a change in functionality or in physical location) is bound to have a profound effect on the human. In the case of VR, the physical changes in the location of the interfaces alone are enough to fundamentally change the human user’s experience: users are now standing up and mobile to an unprecedented degree, and yet this mobility has nothing to do with exploring the interfaces themselves or their affordances – the mobility is entirely in the service of exploring a pre-determined, usually carefully controlled virtual environment.

One of the recurring issues I raise in Reading Writing Interfaces is that of invisibility – especially the danger in interfaces designed to either disappear from view or distract us from the fact that we have no understanding of and no access to how the interface is shaping and determining what and how we know, what and how we create. As I wrote in the introduction, “Despite our best efforts to literally and figuratively bring these invisible interfaces back into view, either because we are so enmeshed in these media or because the very definition of ideology is that which we are not aware of, at best we may only partly see the shape of contemporary computing devices.” However, as I argued in my book and as I still believe, literature and the arts are built to take on the work of demystifying these devices and these interfaces and making both visible once again. Thus, while I think the fundamental problem with VR as I describe it above is that users are becoming even more estranged, even more alienated from whatever lies behind the glossy digital interface (in fact, now the estrangement is both literal and figurative, as the computer producing the VR experience is potentially as much as twelve feet away), I have already noticed that writers and artists are taking this challenge on. This is precisely why I was so eager to interview Illya Szilak about “Queerskins,” the work of interactive cinema she’s creating with Cyril Tsiboulski for the Oculus Rift. Szilak is a long-time active participant in the digital literature community as both a writer/creator and a critic (writing for the Huffington Post). Her transmedia novel Reconstructing Mayakovsky was included in the second volume of the Electronic Literature Collection. Her latest piece, Queerskins, is described on the Kickstarter page for the project as “a groundbreaking interactive LGBTQ centered drama that combines cutting-edge tech with intimate, lyrical storytelling.” Below are my questions and her answers – enjoy!

*

In the process of writing and creating Queerskins with Cyril Tsiboulski, where do you locate the interface in virtual reality systems such as the Oculus Rift? Is the interface different for you as writer/creator than it is for the reader/user?
The interface for us is between reality and virtuality. The hardware of VR is what allows us to navigate that. Of course, books, too, do that, but, without going into a scientific or philosophical discussion of “presence,” suffice it to say that the experience of being in another world is a primordial mechanism for organismal survival that relies on a motor map of interactivity. Reading may create a conceptual version of this, but your body does not experience it in the same way. For me it relates theoretically to Marinetti’s total theater, especially his manifesto on Tactilism, in which he says: “The identification of five senses is arbitrary, and one day we will certainly discover and catalogue numerous other senses. Tactilism will contribute to this discovery.”

I think machine-extended bodies have the potential to learn new forms of sensing, of “reading” the world, and VR will certainly be used for this transcendence. We are also interested in producing a kind of Brechtian estrangement, so that this transcendence is always in dialogue with the realities of embodiment; even VR hardware requires a body to perceive.

In Queerskins, we are using a variety of technologies and techniques to manufacture an aesthetic and narrative sweet spot between reality and artifice. It was important not to make this into a seamless whole but to leave spaces between, so that the user is able to attain some amount of critical distance; so we are combining a 3D-modeled car interior created from photogrammetry, 360 video landscapes, 3D-scanned and CGI objects and animations, and 3D volumetric live-action video.

As with all our narratives, we want to explore the tension between material, historical, embodied realities and virtual realities, which includes, in the case of VR, not just 3D immersive environments but also, as with our online narratives, harnessing user imagination and memory. For me, ethics are linked to the material and historical. The lived realities (this time–this place) of LGBTQ people can’t be wished away. So, it was really important to use historically accurate objects. Almost every object in the box and many of the sounds are archival – bought off eBay and 3D scanned or, in the case of sound, recorded by others in different environments or found on the Internet Archive.

At the same time, we recognize the perhaps quintessentially human desire for transcendence: through love, technology, sex, religion, writing, art, imagination, memory, and storytelling itself. So, we love the incredible possibilities that VR generates in that respect (we have planned three more episodes for queerskins: a love story, and transcendence becomes a more and more complicated and innovative part of this, though always connected back to material/historical realities). In this episode, we are more interested in the gravity of the experience. A young man has died of AIDS, essentially abandoned by his parents. So, no, you don’t get to fly or move mountains. You will be allowed two transcendent moments, which we are calling memory spaces. In them, there is a change of place and time. In the first, you can get up and walk through a cathedral, but all you will find are just the sounds and images of Sebastian’s everyday life – in other words, memories.

These moments of transcendence are differentiated from the emotionally wrenching reality of the car ride both aesthetically and through the user’s sense of space and agency. These latter elements are, of course, key narrative devices in VR. VR is a spatial medium – more so, I think, than film – and it was important for us to create a situation that could be read “body to body,” because we wanted to hook into the old brain of motor neurons to create emotional responses to the environment. For the most part, the visitor (that’s what they call the user or reader now) is stuck in the car with no way to move (in fact, we will seat-belt them into the chair which we had created for filming the actors in front of a green screen) behind the two actors. So, we actually worked with the theater director, choreographer, and Butoh dancer Dawn Saito to choreograph the two actors’ gestures in a kind of missed call and response.

Again, staying with gravity and materiality and mortality, and to maintain immersion – because we need the user to be absolutely present for the emotional bloodletting happening in close proximity in the front seat – it was important that users not be distracted by having to learn new actions and that their interactions be of the everyday variety; so you can pick up things, you can turn the pages of a diary, you can walk. (In later episodes you will be able to play music on a virtual lover’s body or walk on the ceiling and have kaleidoscopic housefly-vision, and make the statues of a church come to life…) But for this one, well, hey, you are walking and moving and doing and alive and, that, at least, is better than dead. Limiting user agency here is purposeful. We want you to feel the loss: you cannot speak, nor write, nor communicate in any way, just as Mary-Helen cannot tell her son that she is sorry or that she loved him.

We are really interested in sound because not only is it spatial, it can be used to harness the user’s imagination – very old-school transcendence, and also very much associated with older technologies like radio and text. So, Queerskins starts out with credits and sound. (Skywalker Studio is doing sound post-production and audio design for us.) You begin by imagining the story. This cues users subtly to their role. You are the co-creator. You get pieces of information and have to put together the story, and most importantly you come up with an idea of who the man who died was and the life he led. When you finally hear his voice speaking his own intimate diary entries, he may or may not be what you thought he would be. The diary is in a sense the missing body. You will find it on one side of you on a pew in the cathedral memory space. On the other side is Sebastian’s empty funeral suit. (Which is the queerskin?)

Can you describe a few creative possibilities opened up by the software/hardware you’re using?
Depthkit was used for filming the actors – it’s basically software that processes data from a high-end DSLR video camera, which creates a texture map, and from a Kinect, which creates a volume map, and fuses these to create live-action 3D volumetric video. It allows us to have actors in a CGI environment, which gives the user a sense of “being there” much more than 360 video does. The 360 video outside the car has a flatness, a nostalgic aesthetic that we actively sought (we flew to Missouri and drove around rural areas ten hours a day with a stranger I met on FB) after we did initial experiments. It looks like old rear-screen projection or like you are traveling through a home movie. It’s “real” in the sense that documentary video is “real” but you don’t feel like you are really in the space with your body.
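Depthkit’s actual pipeline is proprietary, but the basic move it performs – back-projecting each pixel of a depth map into 3D space through the camera’s intrinsics and coloring the resulting points from an aligned video frame – can be sketched in a few lines. The following Python is a minimal illustration under invented intrinsics and a toy frame, not Depthkit’s code:

    # A generic sketch of the idea behind volumetric video fusion: project
    # each pixel of a depth map into 3D using the camera intrinsics, then
    # color the resulting points from an aligned RGB frame. The intrinsics
    # and the 4x4 "frame" below are invented for illustration.
    import numpy as np

    def depth_to_colored_points(depth, rgb, fx, fy, cx, cy):
        """Back-project an aligned depth map (meters) and RGB frame into an
        (N, 6) array of x, y, z, r, g, b points."""
        h, w = depth.shape
        us, vs = np.meshgrid(np.arange(w), np.arange(h))
        valid = depth > 0                      # zero depth = no reading
        z = depth[valid]
        x = (us[valid] - cx) * z / fx          # pinhole camera model
        y = (vs[valid] - cy) * z / fy
        colors = rgb[valid]
        return np.column_stack([x, y, z, colors])

    # Toy stand-ins for one Kinect depth frame plus one DSLR texture frame.
    depth = np.full((4, 4), 2.0)               # everything 2 m from the camera
    rgb = np.random.randint(0, 256, (4, 4, 3)) # fake color texture
    points = depth_to_colored_points(depth, rgb, fx=525.0, fy=525.0, cx=2.0, cy=2.0)
    print(points.shape)                        # (16, 6): a tiny colored point cloud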

Can you describe a few limitations you’ve faced by the same software/hardware? How has it shaped or determined what you’re able to create?
We spent a lot of time and money making sure we could actually get the actor footage into the CGI car in a way that looked realistic. That being said, it is not perfect – there is flare around the edges, and the Kinect reads everything in the same plane, so we had to make sure that the actors didn’t cross over into each other. As I said, we were already choreographing the actors, so this just became part of the choreography. The 360 video shakes because we shot from a car – we are removing that but will also be using haptics – a motor in the seat the visitor sits on – to make the user’s body feel like it is shaking. In VR, a lot of this comes down to $. There are alternatives which would cost a lot more. But, then, for us, part of this is working aesthetically around the limitations. Also, optimization in the game engine is an issue: we have had problems with frame rate drops that put our audio out of sync with video. These projects are so complicated that sometimes you don’t know what exactly is causing this. However, frame rate drops are not an opportunity for experimentation like some other hardware limitations. That is a failure. These are limits of aesthetic expectations and our hardwired senses – we can’t really play with that in this piece. So, Cyril has had to play with the script a lot to decrease frame rate drops.

How, if at all, do you want your reader/user to be aware of the VR interface?
We had to wrestle with whether to let the user get up and walk because this would certainly disrupt the cohesiveness of the experience (the user might need some prompting and direction and will need to be led back physically to the seat) but, in the end, we decided the agency and sense of freedom this afforded (a refuge of sorts) was worth it. Moreover, this episode, like all planned episodes, is part of an interactive physical installation. They can act separately, but together provide a richer experience. The installation for this episode is a performance art “game” that we will install with the VR piece. One other thing of interest: we are hoping to use Leap Motion for the haptics, not Oculus Touch. Leap Motion is controller-less – Cyril saw it in a Laurie Anderson piece at MASS MoCA and we are working with the developers. It is incredibly natural feeling, as your hands appear virtually. Leap Motion recognizes gestures, so the interface in this case really disappears, and for non-gamers it means there is no learning of controllers; for us this is optimal, especially given a film audience at first.

Media Archaeology and Science Fiction

Benjamin Robertson and I are very pleased we had the opportunity to co-author this piece on the connections between the Media Archaeology Lab and science fiction for the “Notes and Correspondence” section of Science Fiction Studies – thank you to Lisa Swanstrom for inviting us to contribute!

*

The motto of the Media Archaeology Lab (MAL) at the University of Colorado, Boulder is: “the past must be lived so that the present can be seen.” As a lab, rather than a museum, the MAL prides itself on allowing, even encouraging, visitors to turn machines on, to play with them, to find out how they work. Following from the assumptions of media archaeology, which gives the lab its name, the MAL challenges facile histories of technology and computation by demonstrating that our current media ecology came to be not by way of progress from simple to complex or from primitive to modern. Rather, it derives from strategic, profit-oriented motives paired with conscious choices about the way technology ought to operate (and, of course, these choices are in part determined by the affordances of the technologies extant at the time they are made). By drawing attention to the foregoing, the curation and exhibition of media in the MAL points to how our present moment could have been dramatically otherwise.

This “otherwise,” however, can be quite difficult to understand or visualize, and it’s here that the MAL and its mission enjoy a curious and productive relationship with science fiction. After all, it’s become quite commonplace amongst science fiction critics and readers to acknowledge that sf has far less to do with any actual future it would purport to represent than with the present in which it was written. More precisely, the future that a given work of sf describes will invariably be based upon assumptions its writer makes, assumptions determined by the historical, cultural, economic, technological, and social milieu in which she wrote. Excavating this milieu, however, can be challenging given that such excavation can only take place from within a new milieu that certainly derives from the old one in part, but also departs from it based upon historical events the old one could not predict from its own perspective.

Although the reference will be familiar to many readers of Science Fiction Studies (perhaps to the point of banality), William Gibson’s Neuromancer provides an excellent example of how the prescience of the best sf is always tempered by the limitations of historical situatedness. Famously, the novel begins, “The sky above the port was the color of television, tuned to a dead channel.” At once, Neuromancer announces the death of the 1980s’ dominant medium even as it acknowledges the reality of this dominance. Otherwise put, the novel is able to predict the end of television as the developed world’s dominant medium precisely because television was the developed world’s dominant medium at the time it was written. For all of this prescience, however, and for all of the rest of the novel’s prescience about multinational capitalism, the viral spread of subcultures, the importance of networked communication, and more, it gets quite a bit wrong. It does not seem to predict the significance of the graphical user interface. It certainly does not hint at the rise of the smart phone or the development of the mobile web or the proliferation of the isolated app. For that matter, it does not predict the browser-based online ecosystem that the app ecosystem would later replace. In fact, it’s possible to read Gibson’s most recent novel, The Peripheral, as his attempt to rewrite Neuromancer in the context of the rise of the smart phone.

We do not mean to criticize Gibson specifically or science fiction generally for any failure, but only to point out that our visions of the future will always be limited by the historical moment in which we develop them. The Media Archaeology Lab not only understands this limitation, but celebrates it by way of its collection of historical, working computers. It possesses:

  • an Altair 8800b from 1976, an eight-bit computer which operates by way of switches and outputs by way of a series of LED lights;
  • an Apple Lisa, the first “affordable” personal computer to make use of a graphical user interface (although the $10,000 price tag, in 1983 dollars, stretches the concept of affordability to its breaking point);
  • numerous Mac Classics, the descendant of the machine that was to ensure that 1984 would not be like 1984;
  • and several NeXTcubes, shepherded onto the market by CEO Steve Jobs during his exile from Apple in the early 1990s.

In total, the MAL houses over 35 portable computers, 73 desktop computers, 22 handheld devices, and 13 game consoles in addition to a substantial collection of digital and analog media extending back to the late nineteenth century. Additionally, the MAL collects manuals on early office technologies, operating systems, and software; books on the history of computational media and early humanities computing; and computer magazines and catalogs from the early 1970s through the 1990s.

When one encounters the collection, or the individual items that comprise it, one encounters a concrete past rather than a speculative future. At the same time, one encounters the dreams that past had about its future, dreams expressed in the tools that would finally build it. Far too often, however, these dreams become too solid. That is, we see only what did happen, and take it for the only thing that could have happened. Gibson certainly seems to have predicted the rise of networked culture, but we do a disservice to ourselves and become irresponsible critics and historians if we do not acknowledge and struggle to understand how he was wrong even about that which he got “right.” Likewise, we misunderstand the MAL’s collection if we see in it only the prehistory of the present moment. Certainly, many of us “know” how things came to be the way they are as well as the key concepts and technologies that paved the road from past to present: ARPANET, GUI, Apple, Windows, WWW, email, Internet, iMac, cell phone, cell phone camera, smart phone, iPad, net neutrality, etc. There can be no doubt that this history involves a great deal of “lock in,” a great deal of determinism. As people used, for example, GUIs, they created software for GUIs and hardware that could run it. They developed the GUI itself and taught people to interact with computers in this way rather than another way—to the extent that any other way became nearly impossible. Now, for most everyone, opening the terminal is impossible in fact and terrifying in theory. This determinism offers up the present as the inevitable consequence of the past and thus it also offers up the present state of affairs as “natural,” even though the past could not have foreseen the current state of affairs. The past, in fact, got many things “wrong,” just as did Gibson. And, as with everything that turned out to be “wrong” about Neuromancer, the past’s technological mistakes are in many ways more interesting than what turned out to be right.

For example, and by way of conclusion, one of the most interesting items in the MAL collection is a Vectrex, a complete home video game system developed in the early 1980s during the video game boom. The Vectrex was “complete” insofar as it not only included the hardware necessary to run games and the controllers necessary for humans to interface with this hardware/software system, but also the display itself. In fact, this display was, uncommonly if not uniquely at the time, itself a means by which the user could interface with the Vectrex and its games, by way of a light pen. Rather than displaying pixels, which are the basis of most contemporary displays, the Vectrex’s monitor makes use of vector graphics. Although the technological differences between these two conceptions of output display are interesting, more important here are the assumptions, even the philosophies, behind the two technologies. Whereas pixels construct wholes out of parts, vectors start with wholes. In the former case, the more parts you can fit onto a screen, the better the resolution of the final image will be. However, no matter how many pixels the screen displays, the parts will always become visible at a certain level of magnification and thus become blurry. The industry response to this blurriness involves packing more and more pixels onto the screen, an arms race of sorts that requires increasing amounts of resources to stay ahead of the curve. By contrast, vector graphics solve the problem of magnification by their very nature, albeit at the cost of color and a certain type of complexity.
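The difference in kind between the two philosophies is easy to demonstrate. In the toy Python sketch below (invented data, purely illustrative), the vector line survives any magnification because it is stored as geometry, while the raster line, stored as pixels, degrades into blocks:

    # A toy illustration of the two display philosophies: a vector line is
    # stored as endpoints and scales exactly, while a raster line is stored
    # as pixels and turns into staircase blocks under magnification.
    import numpy as np

    # Vector representation: two endpoints. Scaling is exact at any factor.
    line = np.array([[0.0, 0.0], [1.0, 1.0]])
    print(line * 1000)                 # still a perfectly crisp line

    # Raster representation: the same diagonal sampled onto an 8x8 grid.
    grid = np.zeros((8, 8), dtype=int)
    for i in range(8):
        grid[i, i] = 1                 # "draw" the diagonal as pixels

    # Magnify 4x by repeating pixels: the line becomes visible blocks.
    zoomed = np.kron(grid, np.ones((4, 4), dtype=int))
    print(zoomed.shape)                # (32, 32) of staircase blocks, not a line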


Whether vector graphics—which readers might remember from such stand-up arcade games as Asteroids and Tempest—could have ever solved the problems of their inherent limitations is impossible to know. Raster graphics “won” the competition, although suggesting that there was a competition at all is somewhat disingenuous. No other gaming system or general computing system seems to have taken up the cause of vector graphics. As such, the Vectrex seems to us now nothing but a mistake, a dead end—quirkiness and interestingness notwithstanding. However, viewed from another angle—one that we in the present only dimly perceive—the Vectrex suggests an entirely different future. This future is one determined less by a quest for more power, more resolution, more as a good in itself. Rather, it is one that involves a fluid movement and elegance current computation cannot hope to achieve. What cognitive estrangements, what conceptual breakthroughs, what utopias or dystopias such a novum might have produced we leave to the sf writers to imagine. Perhaps we might someday see the advent of vector punk. Regardless, the MAL invites the historians, the critics, the archaeologists to think of the past in terms of its multiplicity, in terms of all of the positivities it contains and not simply those that produce that narrow thing we call the present.

As If, or, Using Media Archaeology to Reimagine Past, Present, and Future

Below is an interview Jay Kirby conducted with me that’s been published in a special section, titled “Media Genealogy” and edited by Jeremy Packer and Alex Monea, of the International Journal of Communication 10 (2016). I’m grateful to Jay, Jeremy, and Alex for all the work they did to put this issue together.

*

Abstract: Jay Kirby, PhD student in the Communication, Rhetoric, and Digital Media program at North Carolina State University, conducted this interview with Associate Professor Lori Emerson to focus on her research about how interfaces and the material aspects of media devices affect our uses of and relationships with those devices. Emerson, who runs the University of Colorado’s Media Archaeology Lab, explains how we can look at older technology that never became an economic success to imagine what could have been and to reimagine what is and what could be. In the Media Archaeology Lab, Emerson collects still-functioning media artifacts to demonstrate these different possibilities. In this interview, Emerson draws on examples from digital computer interfaces, word processors, and other older media to show how their material aspects are bound up in cultural, commercial, and political apparatuses. By bringing these issues to light, Emerson shows how a critical eye toward our media can have far-reaching implications.

Keywords: media archaeology, interface, design, Michel Foucault, Marshall McLuhan

Jay Kirby: The first thing I wanted to do is to get a sense of your use of media archaeology when you are looking at media. What do you find valuable about the archaeological method? In particular, I would like to know, first, how the archaeological method informs your research and, second, how that might inform your curation of the Media Archaeology Lab.

Lori Emerson: In my writing, teaching, and work in the lab I am often looking for ways to undo or demystify entrenched narratives of technological progress. It’s a bit cliché or tired in the media studies world, but those narratives are so ingrained in our culture that I think all of us have a hard time seeing through what amounts to an ideology. Happily, I’ve found there is a recursiveness to media archaeology that allows me to continually cycle back and forth between past and present as a way to imagine how things could have been otherwise and still could be otherwise—it’s a fairly straightforward technique for unsettling these entrenched narratives. Moreover, using media archaeology in this way is not a conventional way to undertake history; rather, it’s a way of thinking you can mobilize to critique the present. I’ve noticed that as media archaeology becomes better known and gains more purchase in academia, scholars who work on media history of any kind call it “media archaeology,” and often their notion of “history” is something quite different from the Foucauldian/Kittlerian lineage of media archaeology I’m invested in.

But also—to get at the second part of your question about the Media Archaeology Lab (MAL)—while it’s perfectly effective to write conventional scholarly pieces on media archaeology, over the last couple of years, as the MAL has expanded and matured, I’ve found that undertaking hands-on experiments in the lab with obsolete but still functioning media from the past is perhaps an even more direct technique for breaking through the seductive veneer of the new and the resulting pull we feel to quickly discard our devices for something that’s only apparently better. New devices are only better if speed is the primary criterion for progress. But what about a machine like the Altair 8800b from 1976? As I ask my students when they come to the lab for the first time, is the Altair really just a profoundly limited version of contemporary computers? Undoubtedly, this eight-bit machine that operates with switches and whose output is flashing red LED lights is slow and difficult (or just foreign) to operate, but, for one thing, for almost anyone born after the mid-to-late 1970s, operating this machine in the lab is likely your first direct experience of computing at the level of 1s and 0s. All our contemporary devices are constantly computing 1s and 0s, but we’ve become utterly estranged from how these devices actually work because they’ve been carefully crafted to seem as unlike computing with 1s and 0s as possible. So, my sense is that as you use a machine like the Altair, your contemporary laptop gradually loses its aura of magic or mystery and you start to palpably experience the ways in which your laptop consists of layer upon layer of interfaces that remove you ever more from the way your computer actually works. For another thing, more often than not, using the Altair opens up the possibility of reseeing the past—what if the computer industry had taken a slightly different turn and we had ended up with Altair-like devices without screens or mice? And therefore using this obsolete machine also opens up the possibility of reseeing the present and the future—if we no longer passively accept what the computer industry gives us, what could our devices look like? What do we want them to do?
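For readers who have never stood at the Altair’s front panel, a few lines of Python can stand in for the interaction. This is a sketch of what switch-and-LED programming amounts to, not an emulation of the machine, and the bytes are arbitrary example values:

    # A sketch of front-panel programming, not an Altair emulator: set eight
    # switches, deposit the byte into memory, read the result back as LEDs.
    memory = [0] * 256                      # a tiny bank of 8-bit memory cells

    def deposit(address, switch_bits):
        """Flip the eight switches (a string of 1s and 0s) and press DEPOSIT."""
        memory[address] = int(switch_bits, 2)

    def leds(address):
        """Show a memory cell the way the Altair does: lit (*) and unlit (.) LEDs."""
        return format(memory[address], '08b').replace('1', '*').replace('0', '.')

    deposit(0, '00111110')                  # one byte, entered switch by switch
    deposit(1, '00101010')
    print(leds(0), leds(1))                 # ..*****. ..*.*.*.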

Jay Kirby: One of the things that strikes me about your work is your examples of interfaces. In Reading Writing Interfaces (Emerson, 2014), you use examples such as Emily Dickinson’s fascicles or typewriter poets. This selection seems to be outside the dominant history and perhaps constitutes a minor history. In this sense you undo assumptions of progress because we are looking at these minor histories that existed but that weren’t played out.

Lori Emerson: Yes, I think you’re right. But I’ve also discovered that, for some reason, concrete poetry is now taught in some form or other in high schools across the U.S. What’s not taught is how these poets were not creating poems of self-expression or poems for close reading—they were showing us how to use and misuse writing media. And of course, Dickinson is far from a minor poet, but, just as with the concrete poets, Dickinson’s wildness is often elided or reduced to cute aphorisms we memorize or close-read.

Jay Kirby: So, when you choose technologies to curate in the lab, is your choice based on how the technologies are part of a minor history, or is it based on how they are misunderstood in the same way as Dickinson and concrete poets?

Lori Emerson: Now that I think about it, I don’t see the oddities in the lab as minor or peripheral in the history of computing. I think of them—and I just recently came across this term from geology—in terms of their place in a branching phylogeny of technological devices. In this way, the Altair 8800b represents a branch off the main line, and it is peripheral only in the sense that it wasn’t an economic success. But certainly, for most people visiting the lab, their initial tendency is to marvel at how “primitive” the machines are, or even how ridiculous or impractical they are. At that point, I try to encourage visitors to slightly reframe their experience from imposing the present on the past to instead experiencing the friction that exists between our present-day interactions with these machines and the way the producers originally imagined and even prescribed our interactions. For example, the manuals in the lab for the Apple Macintosh, released in 1984, describe in minute detail, over many pages, how to double-click, how you train your finger to click very quickly, and what a window or a file is. Reading the manuals is akin to visiting a foreign land but from the obverse insofar as the manuals defamiliarize where you already live. All of a sudden you start to think, “Oh wow, clicking is not a natural gesture; there was a moment when people really had to think consciously about this gesture and train their bodies to adapt to this physical action.”

Jay Kirby: Now it doesn’t seem that way at all, I guess because double-clicking has become so ubiquitous.

Lori Emerson: I think so.

Jay Kirby: I’d like to talk about power in relation to these technologies. How do you see the relationships between power and knowledge in the creation of these interfaces? Who are the players, and what happens when the interface is either present, as you talk about early on in your book, or absent or transparent, as with later interfaces?

Lori Emerson: What do you mean by players? Do you mean people or technology?

Jay Kirby: I like to think of them on somewhat equal levels. When an interface is being designed, who or what influences decisions? And how do those decisions rearticulate relationships between knowledge and power?

Lori Emerson: When I was doing research for my book, I became fascinated with interfaces from the 1970s, especially ones related to Smalltalk and the Xerox Star, that were teetering right on the precipice of being designed for the novice as well as the expert. Now, I have never had the opportunity to actually use a Xerox Star—they are incredibly rare and most of them are in museums now—so I had to piece together my understanding of this machine by looking at manuals, magazines, and screenshots from the 1980s. But it seemed to me that interfaces like the one in the Star opened up possibilities for us not to have to live in the either/or scenario of being a user or an expert. This binary was a marketing ploy, advanced especially by Apple, to make people believe that you could only ever have a machine that was either for one or the other, and since most people identified as novices, so the logic went, your only choice was to buy a “user-friendly” Macintosh. Apple made the underlying workings of the Macintosh inaccessible or invisible so that you would never know how it worked. Moreover, Apple tried to nudge you into thinking that you’d never need to know.

Jay Kirby: So, it was a marketing and design decision to create an interface that made the underlying mechanisms invisible, as a way to create a false division between novice and expert?

Lori Emerson: Yes, I think so. There were interfaces proposed in the late ’70s that allowed those two groups, the experts and the novices, to use the same machine; the novice could use the ready-made tools included in the system, while the expert had the ability to create their own tools or even create tools to create more tools. But, to get back to your question about the relationship between power and knowledge, I want to make clear that the design and choice of interface is not a minor technical detail—it’s not just that interfaces could have been otherwise, but instead that interfaces determine how and what you create on your machine, and the choice of one over another opens up or forecloses on possibilities.

Jay Kirby: Interfaces rearrange the relationships between power and knowledge.

Lori Emerson: Yeah. While there’s no doubt that Apple had its eye on the untapped market of the novice user, in order to maintain their monopoly on this market share over the long term, they had to design an interface that was not just easy to use but that also disempowered users, so that users eventually came to think there was no need to understand how their machine worked or how it was acting on them, rather than them acting on their machine. And of course, developing this mind-set in consumers has had long-term, cross-generational repercussions as these “user-friendly,” out-of-the-box machines found their way into homes and schools and became the first computer that many children used.

Jay Kirby: I am curious about your conception of how media technology, the interface, and the human interact. You drew on Marshall McLuhan in your book, but I felt as if Friedrich Kittler was also present. I’ve always read them as being, to a certain extent, opposed, where McLuhan seems to have the user extended through media and Kittler seems to posit media as something imposed on the user.

Lori Emerson: Kittler doesn’t come into my book obviously, but he’s very present in terms of how I’m thinking about media poetics and about rereading the history of experimental 20th- and 21st-century writing as expressions of what Kittler calls the histrionics of media. Kittler helped me read these strange photocopies of photocopies of photocopies by concrete poets from the 1960s and 1970s not for what the blurred text says but for how these texts are recordings of media facts. McLuhan was more obviously useful for the chapter on concrete poetry because he so clearly influenced and was influenced by these poets; he was one of the first to mesh together literary and media studies to argue that poets are “probes” into the limits and possibilities of writing media. I’ve never seen McLuhan and Kittler as incompatible, and I have to admit I sometimes think it’s intellectually lazy simply to claim that McLuhan was anthropocentric and Kittler was not.

Jay Kirby: Right.

Lori Emerson: Just in the last couple months, probably from teaching McLuhan for the 12th or 13th time, I’ve come to see that McLuhan and Kittler are much closer to each other than you might think. McLuhan does say that media first act as extensions of “man.” But if you just combine the two famous McLuhanisms, “media are the extensions of man” (McLuhan, 1994, p. 4) and “the medium is the message” (pp. 7–8), you can see there’s a strange hinge moment where media first extend certain human capabilities, but then they turn back on the human and shape the human. McLuhan’s entire theory of how media work falls apart if media don’t come back and shape humans. I understand that his entire system for understanding media begins and ends with humans, but at the same time he knows that each medium plays a fundamental role in determining what you can do and how you can do it. Kittler, to me, comes in at that hinge point and just follows the line of thought extending from the medium to the human.

Jay Kirby: You’ve already mentioned in our discussion the notion of user-friendliness, which seems to illustrate one part of this mutually influential relationship between humans and technology—the way in which design decisions determine how we use our computers, which in turn shape us as users. In Reading Writing Interfaces you note a shift in what user-friendly means. Why do you think this shift occurred?

Lori Emerson: As I mentioned briefly earlier, I think most of it had to do with economics. How long were we going to go without trying to make personal computers as profitable as possible?

And the minute you try to make them profitable, you are also going to have to standardize them, which involves creating a notion of the standard user who needs their computer to be “user-friendly.” I’m not sure anything like a standard user exists—it was created by companies like Apple through persistent and clever marketing to convince people they should identify as standard users. By contrast, in the ’70s, when the computer was not yet very profitable and it was still a niche market item for tinkerers and the curious, it was marketed in more philosophical terms. My favorite ad from that era is for Logo, a learning-oriented programming language. In an issue of Byte magazine from 1982 you can find an ad that describes Logo as “a language for poets, scientists, and philosophers” (Logo, 1982). Incredible! At this time, computers were more about learning and creativity—open-ended learning and creativity.
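Logo’s signature was turtle graphics, and Python’s standard turtle module descends directly from it, so the flavor of that open-ended play is easy to recover today. A square spiral, in the spirit of a Logo primer (a minimal sketch, not code from the ad itself):

    # Open-ended play, Logo-style: Python's standard turtle module is a
    # direct descendant of Logo's turtle graphics. This draws a square
    # spiral, each segment a little longer than the last.
    import turtle

    t = turtle.Turtle()
    for step in range(60):
        t.forward(5 + step * 3)   # lengthen the line as the spiral grows
        t.left(90)                # turn a right angle each time
    turtle.done()                 # keep the window open to admire the result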

Jay Kirby: This idea of moving from open-ended play and creativity into something more limited makes it seem as if there is some sort of power constraining us. Michel Foucault discusses this limiting and controlling aspect of power, but he also says there can be a productive element to the exercise of power. Do you think the shift away from open-ended play and creativity is entirely negative?

Lori Emerson: People will always find a way to be playful and creative with the tools they’re given. In terms of the shift toward user-friendly design, I think every technology should steer clear of calling itself user-friendly because of the way that term is now associated with disempowering users. That said, without user-friendly design, we would never be able to type. Or perhaps I should say that even though a keyboard design such as QWERTY is not the most efficient, its utter ubiquity has turned it into a kind of user-friendly design. Also, importantly, QWERTY does not disempower users so much as it slows down their typing. The QWERTY keyboard works well because it has become naturalized and invisible as a result of its ubiquity, so you no longer have to think about the act of typing itself. So the user-friendly does have some value, but, to go back to my earlier point, that value is lost once the user-friendly disempowers us and once it’s leveraged against us through the creation of a false binary between the novice and the expert user—between the creation of a machine that’s easy to use and one that allows you to build more tools. The interfaces from the 1970s that I talk about in my book show this binary isn’t necessary—and it wasn’t necessary for a while.

Jay Kirby: Maybe your last point can return us to the question of what changed. You mentioned the economy. But is there something we should be doing, perhaps through pedagogy, to help people look at interfaces differently?

Lori Emerson: Good question. That is what I use the Media Archaeology Lab for. I sit people down at, say, an Osborne 1 computer, and I invite them to use WordStar, which is entirely text-based and requires you to use about 90 different commands. Next, I ask them to read WordStar against Microsoft Word so that they can begin to actually see how other word processors have different or more capabilities than Word, and hopefully they begin to realize Word isn’t natural—it isn’t the only, or even the best, word processor. There are other ways you can process your documents and have very different, creative results. So to me, pedagogically, the best way to get students to think critically about interfaces is to read the past and the present against each other.

Jay Kirby: I wonder whether we short-circuit some sort of learning if we use an interface we immediately understand?

Lori Emerson: Is there such a thing as an interface you immediately understand?

Jay Kirby: I don’t know. I remember The New York Times ran a story about technology executives sending their children to the Waldorf School, which does not use computers (Richtel, 2011). The idea was that children should experience certain types of learning without the computer interface. Do you think these more transparent interfaces can short-circuit learning?

Lori Emerson: I’m guessing the Waldorf Schools recognize that primarily what’s lost when we use contemporary digital computer interfaces is a mode of learning and processing from print culture. Most of the skills we teach and test in schools are still based on print culture, so in that sense I can understand why one might think it’s beneficial to keep children away from computers in their early years. For me the main problem is not whether learning takes place via digital or analog devices; the problem is the way particular kinds of interfaces become naturalized, when we start to think that there’s only one way to interact with our computers and passively accept whatever the computer industry hands down to us.

Jay Kirby: When I first encountered a computer, it was a command-line interface. It was MS-DOS. Many of my students have never experienced it. Is an experience like seeing a command-line interface helpful for understanding interfaces?

Lori Emerson: Yes, I think so. And I also don’t think that experiencing the command-line interface requires a lot of expertise. I can write out a couple commands on the board and ask students to open up terminal, and all of a sudden they can have that experience. They’re accessing the same information as they might via a graphical user interface, but, through the command line, they can see how a different interface offers an utterly different perspective on the same information. So, yes, I think you should experience the command line. But I also don’t think students need to take years to learn computer programming. Just typing a couple lines of code into terminal can be very revelatory.

Jay Kirby: And many people aren’t going to go and study computer science after that experience. So what does your average person gain from the experience of using the command line?

Lori Emerson: I was recently reading about a famous conversation that took place between Foucault and Noam Chomsky that made it clear Foucault was interested in finding ways to denaturalize political discourse. That’s no small thing. It’s no small thing to denaturalize the tools that we use every single day. So, helping the average person to see how much their access to information is determined by mechanisms that they have no control over and that shape their access to knowledge and creation is profound.

Jay Kirby: So there is a political dimension to it?

Lori Emerson: Absolutely.

Jay Kirby: To return to the argument you lay out in your book, you move forward from the command line to graphical user interfaces and, more recently, to gestural interfaces. Each of these developments seems to make the interface more transparent or more difficult to perceive. Do you have to have that transparency—in the more negative sense of removing access to elements of the interface—if you move from the command line to a GUI to a so-called natural interface?

Lori Emerson: No, not at all. That was the point I was trying to make in the second chapter. There are not only other interfaces but also other visual interfaces. It is not necessary to move from command line to graphical user interfaces. It’s just a continuation of a line of thought that has come to dominate computing. Here is an example that I didn’t talk about in my book. The Canon Cat computer was developed by Jef Raskin—you remember that Jef Raskin was on the design team for the Apple Macintosh. I think he left in 1982 because of a disagreement with Steve Jobs. Then he worked on the Cat, which Canon eventually bought. Raskin designed the Cat to have an interface that was entirely text-based—not command line, not graphical user interface, but text-based. This was 1987. He called it an advanced work processor, not a word processor and not a personal computer.

Jay Kirby: What did that look like?

Lori Emerson: It’s this cute little beige computer with a handle on the back for portability. It has no mouse; instead, all the functionality is built into the keyboard. And it has all sorts of unusual functionalities like “leap,” for example, which is a sophisticated version of search and find that we don’t have today.
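Since working Cats are scarce, a sketch may help here. The essence of leap is that, while a dedicated key is held down, each character you type extends an incremental search and the cursor jumps straight to the match. A minimal Python rendering of the idea—not Raskin’s actual implementation—might look like this:

    # The essence of "leap," sketched rather than reverse-engineered: each
    # keystroke extends the search pattern and the cursor jumps to the next
    # match after the anchor, wrapping around the document if necessary.
    def leap(text, anchor, pattern):
        hit = text.find(pattern, anchor + 1)
        if hit == -1:
            hit = text.find(pattern)         # wrap around to the top
        return hit if hit != -1 else anchor  # no match: the cursor stays put

    doc = "an advanced work processor, not a word processor"
    print(leap(doc, 0, "w"))      # 12 -> cursor lands on "work"
    print(leap(doc, 0, "wo"))     # 12 -> still matching as the pattern grows
    print(leap(doc, 0, "word"))   # 34 -> one more letter leaps to "word"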

Jay Kirby: Interesting. These less common computers, as you note, give insight into what could have been. Another element that interested me in your work was that your examples of people who interrogate these interfaces are artists. In a way, artists are also less common. Are there ways that nonartists can or should be interrogating interfaces? How might one cultivate a critical approach to understanding interfaces in an everyday way?

Lori Emerson: The first answer that comes to mind is that tinkering, play, and creativity are open to anybody and everybody. And in fact, creating glitch art is now accessible to anyone. There are glitch software packages and step-by-step instructions online that show you how to get into the code of a digital image and glitch it from within by opening it as a Word document or a text document. You can also take any function on your computer and push up against it. Anything. Ask yourself, is it possible to break it? How do I misuse it? What are some ways this function could work that the manufacturer didn’t anticipate?
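The most common recipe really does come down to a few lines. Here is a minimal Python sketch of the technique (the filenames are hypothetical, and it assumes an image file larger than its header):

    # The classic glitch move, as a sketch: open an image as raw bytes,
    # corrupt a few of them past the header, and let the decoder's
    # confusion become the artwork. 'photo.jpg' is a stand-in for any image.
    import random

    def glitch(src, dst, n_flips=25, skip=1024):
        data = bytearray(open(src, 'rb').read())
        for _ in range(n_flips):
            i = random.randrange(skip, len(data))   # leave the header intact
            data[i] = random.randrange(256)         # overwrite one byte
        open(dst, 'wb').write(bytes(data))

    glitch('photo.jpg', 'photo_glitched.jpg')       # hypothetical filenames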

Jay Kirby: That is a good example of an accessible way to understand interfaces differently—and related to another new development I see in computers and interfaces, which is the surge in popularity of microcontrollers like Raspberry Pis or Arduinos. How do these fit into this archaeological cut between the transparent interfaces of many computers today and these older pieces of technology?

Lori Emerson: I have some in the lab, and I believe they stand as wonderful interventions into this culture of passively consuming software/hardware configurations. Our Raspberry Pi is very small and affordable. You can see how it works, and you can use it to build other computers on top of it. But, to complicate what I just said about how tinkering is open to anyone, I still worry about accessibility. Even if the price of a Raspberry Pi isn’t much more than the price of a book, I worry especially about gender and how the culture around the machines may not be amenable to or welcoming for women and minorities. I know there are women, for example, who are incredibly adept at playing with Arduinos and Raspberry Pis, but I don’t know any in Boulder. None have shown up at my doorstep. I have no doubt they exist, but at the same time, I know women are a minority in this community.

Jay Kirby: As these microcontrollers look so different from what we might think of as a computer today, do you believe the aesthetics of these objects play into understanding computers and interfaces? I’m thinking of how these early computers came to us as chunks of metal, versus contemporary devices that are almost all screen.

Lori Emerson: Yes. While I was teaching last week, I was thinking about how we only ever look at screens, and how we are never aware of how there is another world behind them. It’s as if the screen was created so you would only look at it rather than think about its situatedness, its constructedness.

Jay Kirby: I understand. There is even a difference between the old CRT screens that have depth—and even though that is not the computational part, there is the idea there is something back there—versus these iMacs that are sheer screen . . .

Lori Emerson: Yes. Or, think once more about the Altair, how it had no screen and yet it was a perfectly functional computer.

Jay Kirby: Exactly. Or the Arduinos.

Lori Emerson: That’s right. It’s difficult because everything has to be—is this Apple ideology?—everything has to be “light and airy.”

Jay Kirby: Not only user-friendly . . .

Lori Emerson: It can’t have heft, or bulk, or weight.

Jay Kirby: As we talk about what these computers look like and what they do, what do you look for in a piece of technology when you’re thinking about adding it to the Media Archaeology Lab’s collection? What makes a good candidate for the lab?

Lori Emerson: I’m always looking for alternative visions of what could be, anything that is odd and unusual, as well as anything that is ubiquitous. Those two poles. It’s important to have Apple Macintoshes in the lab along with the whole lineage of Apple computers because of how much they’ve influenced the computer industry. At the same time, you have to have the oddities or the outliers for reasons I’ve already touched on. I should also mention we are starting to collect analog media, or any kind of media that archaeologically underlies our contemporary media. For example, we just acquired an Edison Diamond Disc phonograph from 1912 from a used furniture store in Boulder. The phonograph came with 30 discs, and each has a large warning on the outside of the record sleeve that says something like, “You may not use this phonograph disc with any machine other than the Edison. If you do, you will destroy the needle and you will destroy the record.” Once you place this warning beside any contemporary proprietary technology, you see quite clearly that the notion of proprietary technology did not originate with Apple or Microsoft; it has a long lineage going at least as far back as Edison. It’s also utterly American.

Jay Kirby: Yeah. That is really fascinating. I guess at that time the phonograph wasn’t yet standardized.

Lori Emerson: As far as I know, Edison and Victrola were competing not just for the largest share of the market but also to make their respective machines the standard.

Jay Kirby: Perhaps this is a good place to talk about your current project, as you’ve been moving from discussing the standardization of interface technology to discussing the standardization of Internet protocols, in particular TCP/IP. Can you tell us more about what you are doing with this project?

Lori Emerson: Yes, thanks for asking about that. “Other Networks” began with an innocent question Matthew Kirschenbaum asked me at the Modern Language Association annual convention a couple years ago. He asked me whether I talk about the ’90s in Reading Writing Interfaces, and I said no, I don’t, and immediately wondered why it didn’t seem to make sense to have a chapter dedicated to that decade. I think the reason is that the ’90s are not so much about hardware and software; they are instead more a continuation of hardware/software design principles that had been standardized by the late 1980s. Instead, in terms of digital media, the ’90s are more about networks and the so-called explosion of the Internet.

So with this new project, I wanted to see if I could extend the logic of media archaeology to look at the materialist underpinnings, the ideological underpinnings, of the Internet—to imagine how it could have been otherwise—which then led me into looking into the particulars of TCP/IP, the protocol that allows all the different networks on the Internet to communicate with each other. That in turn led me to dig through manuals and textbooks on TCP/IP and browse the thousands of requests for comments, or RFCs. These are basically a series of online memos recording people’s proposals and decisions to tweak TCP/IP, and, among other things, the RFCs record the development of TCP/IP and its official adoption in 1982 or 1983. What I was trying to do was to trace the economic, institutional, and philosophical pressures that went into creating TCP/IP. At the same time I was also thinking about what other protocols were up for debate and what difference those might have made to our experience of the Internet today. As it turns out, there were alternatives and there still are alternatives, like the network architecture RINA, but my sense is that it’s been difficult to convince people that a new or different protocol might be beneficial because these alternatives wouldn’t make a dramatic difference to our experience of the Internet. I think people want to hear about some version of the Internet that’s completely new and alien and, as far as I know, this just doesn’t exist.

Jay Kirby: So for whom or for what would these alternative protocols make a difference?

Lori Emerson: Well, this computer scientist I have been talking to—John Day, who is at Boston University—argues that a different structure for TCP/IP might have made the entire Net neutrality debate moot. He believes that a particular layer in TCP/IP, the transport layer, is flawed. The transport layer is what makes the entire discussion about slow lanes and fast lanes possible, because it is where the Internet’s longstanding congestion problem lives. So, if the designers of TCP/IP had managed to put together a different set of layers and a different configuration—maybe not even layers—there wouldn’t be a congestion problem and we wouldn’t need to have this discussion about Net neutrality.

Jay Kirby: This seems to relate back to the idea of interfaces, too. The interfaces can affect the relationships of knowledge and power. Do you conceive of TCP/IP along the same lines as an interface? Rather than a person interfacing with technology or writing, TCP/IP allows for computers to interface with each other. Is this correct?

Lori Emerson: On the surface there is a perfect parallel between the way TCP/IP is structured and the way interfaces were designed for personal computers; both were developed around the same time. TCP/IP is structured according to layers. This model of layers was apparently imported from the way operating systems were conceived of in the late ’60s, and then it was carried over from operating systems into networks. However, there seem to be significant differences in how terms like “interface”—and even “black box”—are mobilized in the two spheres. For example, the layers that constitute TCP/IP are separated by what engineers refer to as interfaces, so I first assumed this meant those interfaces function in the same way that an interface does for us as users. It turns out this isn’t the case. What the designers of TCP/IP have done is create interfaces that allow the layers to communicate with each other insofar as one layer picks up the task of conveying bits where the lower layer left off. The interfaces between layers also black box the layers from each other—the idea is that if any one of the layers stops working, the entire system should not be affected, because the layers have been separated from each other.
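To make that layering concrete, here is a toy sketch in Python—not anything drawn from an actual TCP/IP implementation—of how those between-layer interfaces black box each layer from the others. Each layer sees only the send() method of the layer directly beneath it; the class and header names are invented for illustration:

```python
# A toy illustration (not real TCP/IP code) of layering and black boxing:
# each layer hands data down through a narrow send() interface and has
# no view into the internals of the layer below it.
class PhysicalLayer:
    def send(self, bits: bytes) -> None:
        print(f"physical: transmitting {len(bits)} bytes on the wire")

class NetworkLayer:
    def __init__(self, lower: PhysicalLayer) -> None:
        self.lower = lower  # all this layer knows about what's below

    def send(self, payload: bytes) -> None:
        packet = b"IP-HEADER|" + payload  # wrap the data, then hand down
        self.lower.send(packet)

class TransportLayer:
    def __init__(self, lower: NetworkLayer) -> None:
        self.lower = lower

    def send(self, data: bytes) -> None:
        segment = b"TCP-HEADER|" + data
        self.lower.send(segment)

# Because each layer sees only its neighbor's send() interface, any one
# layer's internals can be rewritten without touching the others.
stack = TransportLayer(NetworkLayer(PhysicalLayer()))
stack.send(b"hello")
```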

Jay Kirby: This is a positive use of black boxing.

Lori Emerson: Yes, exactly. I understand now there’s a way in which black boxing and layering are sometimes very useful, whereas I had previously assumed they only insert more barriers to access for the user.

Jay Kirby: This speaks to what you said about how users don’t always want the interface to be present. Sometimes users want it to recede from view.

Lori Emerson: Fade into the background.

Jay Kirby: As a way to wrap things up, what do you believe people should be attentive to when they are using an interface? Or what should people hope for in an interface?

Lori Emerson: I am wary of any system, any interface, that claims to do things for me and doesn’t allow me to either do it myself or to understand how it’s been done for me and then intervene in some way so that I can do it in whatever way I think is appropriate. This patronizing attitude toward the user is harmful.

Jay Kirby: So that’s what people should be wary of. And you’ve implied an answer to the second part of my question, about what you want from an interface.

Lori Emerson: I want an interface that is configurable and flexible according to my needs. It may come with certain defaults, but I need to be able to configure it to do what I want it to do.

Jay Kirby: That’s an interesting idea. Not long ago TIME made its Person of the Year “You,” by which it meant that the individual can now get whatever they want. But at the same time, there is this idea that someone else will give you exactly what you want. It seems a sort of preemption: rather than “I want x, y, or z,” companies state that “you want x, y, and z.”

Lori Emerson: Oddly, though, I think both positions usually amount to the same thing, as companies such as Facebook offer you the appearance of a proliferation of choice, the illusion that you can make your experience of Facebook exactly as you’d like it—when, of course, you’re only ever offered predetermined choices. If TIME’s Person of the Year is “You,” then this “you” is a corporately controlled version that leads you to believe you’re somehow an empowered user with the freedom to customize anything and everything.

Jay Kirby: Yeah. It’s the commodification of choice rather than choice as choice.

Lori Emerson: That’s right. Rather than open-ended choice, it’s like choosing between Coke and Pepsi, which really isn’t a choice at all.

Jay Kirby: No more RC Cola.

Lori Emerson: Yeah. And no more Fanta. It’s like the 1980s standardization of the personal computer all over again!

References
Chomsky, N., & Foucault, M. (2006). The Chomsky-Foucault debate: On human nature. New York, NY: New Press.

Emerson, L. (2014). Reading writing interfaces: From the digital to the bookbound. Minneapolis, MN: University of Minnesota Press.

Logo. (1982, February). [Advertisement for Logo]. Byte, 255.

McLuhan, M. (1994). Understanding media: The extensions of man. Cambridge, MA: MIT Press.

Richtel, M. (2011, October 22). A Silicon Valley school that doesn’t compute. The New York Times. Retrieved from http://www.nytimes.com/2011/10/23/technology/at-waldorf-school-in-silicon-valley-technology-can-wait.html

selling the future at the MIT Media Lab

The following is the text of a talk I gave at Transmediale on February 5, 2016 as part of a panel with Jussi Parikka, Ryan Bishop, and John Beck on “The Persistence of the Lab.” The text of the talk will eventually find its way into THE LAB BOOK.

*

What follows are some of my initial findings as I’ve been researching the past and present of the MIT Media Lab – a lab founded in 1985 by former MIT President Jerome Wiesner and Nicholas Negroponte of OLPC fame, and directed by Negroponte for its first twenty years. The Media Lab has become synonymous with “inventing the future,” partly because of a dogged, thirty-year marketing campaign whose success we can measure by the fact that almost any discussion of “the future” of technology is a discussion about some project at the Media Lab.

And of course the lab has also become synonymous with “inventing the future” because of the central role it has historically played in wireless networks, field sensing, web browsers, and the WWW, and the central role it now plays in neurobiology, biologically inspired fabrication, socially adaptive robots, emotive computing, and so on. Given this list, you’d be right to think that the lab has long been driven by an insatiable thirst for profit operating under the guise of an innocent desire to just keep performing computerized feats of near impossibility, decade after decade.

But I’ve also come to see that this performance is partly a product of a post-Sputnik Cold War race to outdo the Soviets no matter the reality, the project, or the cost. In The Media Lab – the only book so far devoted exclusively to the lab, written in 1986, the year after it opened – Stewart Brand writes with an astuteness I don’t usually associate with him:

If you wanted to push world-scale technology at a fever pace, what would you need to set it in motion and maintain it indefinitely? Not a hot war, because of the industrial destruction and the possibility of an outcome. You’d prefer a cold war, ideally between two empires that had won a hot war. You wouldn’t mind if one were fabulously paranoid from being traumatized by the most massive surprise attack in history (as the USSR was by Hitler’s Barbarossa) or if the other was fabulously wealthy, accelerated by the war but undamaged by it (as the US was by victory in Europe and the Pacific). Set them an ocean apart. Stand back and marvel. (161-162)

Brand then goes on to explain how much American computer science owes to the Soviet space program, whose success prompted the creation of Eisenhower’s Advanced Research Projects Agency – an agency that, by the 1970s, had an annual budget of $238 million and funded many labs at MIT. And even when ARPA became DARPA, to signal that all agency projects had direct defense applicability, it continued to fund labs such as the Media Lab’s predecessor, Nicholas Negroponte’s Architecture Machine Group. To this day, even though the Media Lab is famous for its corporate sponsorship, the U.S. Army is listed as one of its sponsors, and many of the lab’s projects still have direct applicability in a defense context.

But the lab is also the product of MIT’s long history of pushing the boundaries of what’s acceptable in higher education through its deep ties to the military-industrial complex and to corporate sponsorship, ties that go back at least to the 1920s. We now know that even though MIT worked with corporate partners in the post-World War I years to pay for research programs in chemical and electrical engineering, the Depression put an end to those partnerships until late in the Second World War. World War II in fact became a decisive turning point in the history of university science and technology labs, likely because of the enormous amount of money suddenly available to sponsor research and development contracts.

Historian Stuart Leslie reports that during the war years MIT alone received $117 million in R&D contracts. So, again, naturally, once the “hot war” was over in 1945, it was almost as if MIT needed a Cold War as much as the state did so that it would continue to receive hundreds of millions of dollars’ worth of contracts. As a result, by the 1960s, physicist Alvin Weinberg famously said that it was getting increasingly hard “to tell whether MIT is a university with many government research laboratories appended to it or a cluster of government research laboratories with a very good educational institution attached to it” (quoted in Leslie 14).

Also in the 1960s, the Research Laboratory of Electronics (or RLE) in particular was getting the lion’s share of the funding. RLE was created in 1946 as a continuation of the World War II-era Radiation Lab, which was responsible for designing almost half of the radar deployed during the war. The RLE also became the template for many MIT labs that followed – particularly the Arch Mac group which, again, turned into the Media Lab in the mid-’80s. RLE was one of the first thoroughly interdisciplinary labs; it trained grad students who went on to write the books that future grad students read and responded to by writing books of their own, or who went on to found tech companies; and it groomed future leaders for the lab itself, the university, government, and industry. Given this beautifully efficient system of using labs to both create and replicate knowledge, it makes perfect sense that researchers Vannevar Bush and Norbert Wiener – famous in part for their roles in advancing wartime technology – taught both Julius Stratton, the founding director of RLE, and Jerome Wiesner who, again, later co-founded the Media Lab.

Wiesner’s life in corporate- and government-sponsored labs began in 1940, when he was appointed chief engineer for the Acoustical and Record Laboratory of the Library of Congress. His job at the time mostly involved traveling through the South and Southwest under a Carnegie Corporation grant with folklorist Alan Lomax, recording the folk music of those regions. Two years later, in 1942, Wiesner joined the RadLab at MIT and soon moved to the lab’s advisory committee; during his time there he was largely responsible for Project Cadillac, which worked on the predecessor to the airborne early warning and control system. After World War II, the RadLab was dismantled in 1946 and the RLE was created in its place, with Wiesner as assistant director, then associate director, and then director from 1947 to 1961. That was the year President John F. Kennedy named Wiesner to chair the President’s Science Advisory Committee; Wiesner served Kennedy until Kennedy’s death in 1963 and then served President Johnson for one more year, mostly advising the presidents on the space race and on nuclear disarmament. In 1966 Wiesner moved back to MIT as university provost, and he served as president from 1971 to 1980. With Nicholas Negroponte at his side, he started fundraising for the lab in 1977, while he was still president, and, again, co-founded the lab in 1985 once he had stepped down and returned to life as a professor.

The foregoing is my brief history of one side of the Media Lab’s lineage, a side that extends quite far back into the military-industrial complex, and especially the years of the Cold War, by way of Jerome Wiesner. Now I will move on to the corporate, anti-intellectual lineage operating under the guise of “humanism” that runs through Nicholas Negroponte. The son of a Greek shipping magnate, Negroponte was educated in a series of private schools in New York, Switzerland, and Connecticut, and he completed his education with an MA in architecture at MIT in the 1960s. You might also be interested to know that his older brother, John Negroponte, was a deputy secretary of state and the first-ever Director of National Intelligence. In 1966, the year Wiesner returned to MIT to become provost, Nicholas became a faculty member there, and a year later he founded the Architecture Machine Group – a group that took a systems-theory approach to studying the relationship between humans and machines. While the Media Lab’s lineage on Wiesner’s side runs through the Research Laboratory of Electronics and, earlier, the Radiation Lab, on Negroponte’s side it runs through the Architecture Machine Group – a lab that combined the notion of a government-sponsored lab with a 1960s-1970s-appropriate espoused dedication to humanism meshed with futurism.

But of course, especially since this particular brand of humanism is always tied to an imaginary future, it is a particular kind of inhuman humanism that began in the Arch Mac group and went on to flourish in the Media Lab – one that constantly invokes an imagined future human who doesn’t really exist, partly because that human belongs to an ever-receding future, but also because this imagined future human is only ever a privileged, highly individualized, boundary-policing, disembodied, white, western male. I think you can see the essence of the Negroponte side of the Media Lab in three projects I want to touch on for the rest of my talk today. The first, from the early years of the Arch Mac group, was unglamorously called the “Hessdorfer Experiment” and is glowingly described by Negroponte in a section titled “Humanism Through Intelligent Machines” in The Architecture Machine, written in 1969 and published in 1970.

In the opening pages of the book, Negroponte mostly lays out the need for “humanistic” machines that respond to users’ environments, analyze user behavior, and even anticipate possible future problems and solutions – what he calls a machine that does not so much “problem solve” as it “problem worries” (7). His example of what such an adaptive, responsive machine could look like is drawn from an experiment that undergraduate Richard Hessdorfer undertook in the lab the year the book was written. Negroponte writes: “Richard Hessdorfer is…constructing a machine conversationalist… The machine tries to build a model of the user’s English and through this model build another model, one of his needs and desires. It is a consumer item…that might someday be able to talk to citizens via touch-tone picture phone, or interactive cable television” (56).

To help him build this machine conversationalist, Hessdorfer thought it would be useful to bring teletypewriting devices into a neighborhood on the south side of Boston – what Negroponte calls “Boston’s ghetto area.”

[slide 1]

Negroponte writes:

THREE INHABITANTS OF THE NEIGHBORHOOD WERE ASKED TO CONVERSE WITH THIS MACHINE ABOUT THEIR LOCAL ENVIRONMENT. THOUGH THE CONVERSATION WAS HAMPERED BY THE NECESSITY OF TYPING ENGLISH SENTENCES, THE CHAT WAS SMOOTH ENOUGH TO REVEAL TWO IMPORTANT RESULTS. FIRST, THE THREE RESIDENTS HAD NO QUALMS OR SUSPICIONS ABOUT TALKING WITH A MACHINE IN ENGLISH, ABOUT PERSONAL DESIRES; THEY DID NOT TYPE UNCALLED-FOR REMARKS; INSTEAD, THEY IMMEDIATELY ENTERED A DISCOURSE ABOUT SLUM LANDLORDS, HIGHWAYS, SCHOOLS, AND THE LIKE. SECOND, THE THREE USER-INHABITANTS SAID THINGS TO THIS MACHINE THEY WOULD PROBABLY NOT HAVE SAID TO ANOTHER HUMAN, PARTICULARLY A WHITE PLANNER OR POLITICIAN: TO THEM THE MACHINE WAS NOT BLACK, WAS NOT WHITE, AND SURELY HAD NO PREJUDICES. (56-57)

I barely know where to begin with this passage except to say that the entire racist, deceptive undertaking is, for me, about as far away from a humanism that acknowledges the lives of these particular humans as you can get. It also clearly demonstrates what can happen when we believe so completely in the neutrality of the machine: its assumed neutrality – its assumed capacity to give us pure, unmediated access to reality – can be called on as a magical mechanical solution to any human problem. Got a race problem? Get a computer!

The second project, from about a year later and also run through the Architecture Machine Group, is just as disturbing. This time the subjects of the experiment are not African Americans but, rather, gerbils.

[slide 2]

The experiment, called “SEEK,” was exhibited as part of SOFTWARE, a 1970 show at the Jewish Museum in New York. It consisted of a computer-controlled environment, enclosed in Plexiglas and filled with small blocks, along with gerbils who were there to change the position of the blocks after a robotic arm had automatically arranged them. The machine was supposed to analyze the gerbils’ actions and then try to complete the rearrangement according to what it thought the gerbils were trying to do. Unfortunately, the experiment was a disaster.

[slide 3]

As Orit Halpern puts it, “The exhibition’s computers rarely functioned…the museum almost went bankrupt; and in what might be seen as an omen, the experiment’s gerbils confused the computer, wrought havoc on the blocks, turned on each other in aggression, and wound up sick. No one thought to ask, or could ask, whether gerbils wish to live in a block built micro-world.” Again, this brand of humanism in the name of the future has very little to do with situatedness (or what’s now called posthumanism) – instead it has everything to do with abstraction and transcendence in the name of producing consumer products or R&D for the military-industrial complex.

The last example I’d like to touch on today is One Laptop Per Child, which Negroponte took up as an explicitly Media Lab project in the early 2000s and which, again, continues these same themes of humanism meshed with futurism, combined with an espoused belief in the neutrality of the machine.

[slide 4]

The difference now is that even the guise of academic rigor and scientific care for method that you could see in the Architecture Machine Group has been transformed, probably because of the lab’s obligations to its 80+ corporate sponsors, into the gleeful, continuous production of tech demonstrations, driven by the lab’s other, more ominous motto: “DEMO OR DIE.”

OLPC was launched by Negroponte in 2005 and was effectively shut down in 2014. After traveling the world since at least the early 1980s effectively selling personal computers to developing nations, Negroponte announced he had created a non-profit organization to produce a $100 laptop “at scale” – in other words, according to Negroponte, the cost of the laptop could only be that low in the early 2000s if they could amass orders for 7-10 million laptops. Despite his oft-repeated statement that OLPC was not a laptop project but an education project, the essence of the project was still the same as the Hessdorfer experiment or “SEEK”: got a poverty problem? Get a computer! Worse yet, don’t study what the community or nation actually needs – JUST GET A COMPUTER.

Here’s what Negroponte said in a TED talk in 2006, suggesting that even if families didn’t use the laptops, they could use them as light sources:

I WAS RECENTLY IN A VILLAGE IN CAMBODIA – IN A VILLAGE THAT HAS NO ELECTRICITY, NO WATER, NO TELEVISION, NO PHONE. BUT IT NOW HAS BROADBAND INTERNET AND THESE KIDS – THEIR FIRST WORD IS “GOOGLE” AND THEY ONLY KNOW “SKYPE,” NOT TELEPHONY…AND THEY HAVE A BROADBAND CONNECTION IN A HUT WITH NO ELECTRICITY AND THEIR PARENTS LOVE THE COMPUTERS BECAUSE THEY’RE THE BRIGHTEST LIGHT SOURCE IN THE HOUSE. THIS LAPTOP PROJECT IS NOT SOMETHING YOU HAVE TO TEST. THE DAYS OF PILOT PROJECTS ARE OVER. WHEN PEOPLE SAY WELL WE’D LIKE TO DO 3 OR 4 THOUSAND IN OUR COUNTRY TO SEE HOW IT WORKS, SCREW YOU. GO TO THE BACK OF THE LINE AND SOMEONE ELSE WILL DO IT AND THEN WHEN YOU FIGURE OUT THIS WORKS YOU CAN JOIN AS WELL.

Not surprisingly, and as with the lack of forethought in the experiment with the poor gerbils, by 2012 studies were coming out clearly indicating that the laptops – whether they were used in Peru, Nepal, or Australia – made no measurable difference in reading and math test scores. In fact, one starts to get the sense that Negroponte’s truly remarkable skill, which he began honing in the late ’60s in the Architecture Machine Group, is not design, not architecture, not tech per se, but rather dazzling salesmanship built on a lifetime of pitching humanism and futurism via technological marvels. Even Stewart Brand saw this in Negroponte; he quotes Nat Rochester, a senior computer scientist and negotiator for IBM: “[If Nicholas] were an IBM salesman, he’d be a member of the Golden Circle…if you know what good salesmanship is, you can’t miss it when you get to know him” (6).

And with this, the pitch-perfect ending to this strange story: in 2013, after selling millions of laptops to developing nations around the world – laptops that, again, made no measurable improvement in anyone’s lives – Negroponte left OLPC and went on to chair the Global Literacy X Prize as part of the XPRIZE Foundation. However, that prize no longer seems to exist, and there’s no record of him being with the organization just a year later, in 2014 – it seems he’s finally, quietly living out his salesman years back at MIT, where he began.

XPRIZE, however, does exist and appears to be the ultimate nonprofit based on nothing more than air and yet more humanist slogans:

XPRIZE IS AN INNOVATION ENGINE. A FACILITATOR OF EXPONENTIAL CHANGE. A CATALYST FOR THE BENEFIT OF HUMANITY. WE BELIEVE IN THE POWER OF COMPETITION. THAT IT’S PART OF OUR DNA. OF HUMANITY ITSELF. THAT TAPPING INTO THAT INDOMITABLE SPIRIT OF COMPETITION BRINGS ABOUT BREAKTHROUGHS AND SOLUTIONS THAT ONCE SEEMED UNIMAGINABLE. IMPOSSIBLE. WE BELIEVE THAT YOU GET WHAT YOU INCENTIVIZE…RATHER THAN THROW MONEY AT A PROBLEM, WE INCENTIVIZE THE SOLUTION AND CHALLENGE THE WORLD TO SOLVE IT…WE BELIEVE THAT SOLUTIONS CAN COME FROM ANYONE, ANYWHERE AND THAT SOME OF THE GREATEST MINDS OF OUR TIME REMAIN UNTAPPED, READY TO BE ENGAGED BY A WORLD THAT IS IN DESPERATE NEED OF HELP. SOLUTIONS. CHANGE. AND RADICAL BREAKTHROUGHS FOR THE BENEFIT OF HUMANITY. CALL US CRAZY, BUT WE BELIEVE.

In many ways, XPRIZE is the ultimate Media Lab project: it spans the world, its board includes every major corporate executive you can think of, and it exists to produce not even things anymore but just “incentives.” As for the lab itself, while Negroponte seems to be practically retired and Wiesner passed away a number of years ago, the Media Lab continues to merrily churn out demos and products for consumers and the military under the leadership of Joi Ito – a venture capitalist with no completed degrees, a godson of Timothy Leary, and a self-proclaimed “activist.” MIT couldn’t have found a better successor to the world-class salesman Nicholas Negroponte.

Works Cited

Brand, Stewart. The Media Lab: Inventing the Future at MIT. New York: Viking Penguin, 1987.

Halpern, Orit. “Inhuman Vision.” Media-N: Journal of the New Media Caucus, 10:3 (Fall 2014).

Leslie, Stuart. The Cold War and American Science: The Military-Industrial-Academic Complex at MIT and Stanford. New York: Columbia UP, 1993.

Negroponte, Nicholas. The Architecture Machine: Toward a More Human Environment. Cambridge, MA: MIT Press, 1970.

—. Soft Architecture Machines. Cambridge, MA: MIT Press, 1975.