As If, or, Using Media Archaeology to Reimagine Past, Present, and Future

Below is an interview Jay Kirby conducted with me that’s been published in a special section, titled “Media Genealogy” and edited by Jeremy Packer and Alex Monea, of the International Journal of Communication 10 (2016). I’m grateful to Jay, Jeremy, and Alex for all the work they did to put this issue together.

*

Abstract: Jay Kirby, PhD student in the Communication, Rhetoric, and Digital Media program at North Carolina State University, conducted this interview with Associate Professor Lori Emerson to focus on her research about how interfaces and the material aspects of media devices affect how we use and relate to those devices. Emerson, who runs the University of Colorado’s Media Archaeology Lab, explains how we can look at older technology that never became an economic success to imagine what could have been and to reimagine what is and what could be. In the Media Archaeology Lab, Emerson collects still-functioning media artifacts to demonstrate these different possibilities. In this interview, Emerson draws on examples from digital computer interfaces, word processors, and other older media to show how their material aspects are bound up in cultural, commercial, and political apparatuses. By bringing these issues to light, Emerson shows how a critical eye toward our media can have far-reaching implications.

Keywords: media archaeology, interface, design, Michel Foucault, Marshall McLuhan

Jay Kirby: The first thing I wanted to do is to get a sense of your use of media archaeology when you are looking at media. What do you find valuable about the archaeological method? In particular, I would like to know, first, how the archaeological method informs your research and, second, how that might inform your curation of the Media Archaeology Lab.

Lori Emerson: In my writing, teaching, and work in the lab I am often looking for ways to undo or demystify entrenched narratives of technological progress. It’s a bit cliché or tired in the media studies world, but those narratives are so ingrained in our culture that I think all of us have a hard time seeing through what amounts to an ideology. Happily, I’ve found there is a recursiveness to media archaeology that allows me to continually cycle back and forth between past and present as a way to imagine how things could have been otherwise and still could be otherwise—it’s a fairly straightforward technique for unsettling these entrenched narratives. Moreover, using media archaeology in this way is not a conventional way to undertake history, but rather it’s a way of thinking you can mobilize to critique the present. I’ve noticed that as media archaeology becomes better known and gains more purchase in academia, scholars who work on media history of any kind call it “media archaeology,” and often their notion of “history” is something quite different from the Foucauldian/Kittlerian lineage of media archaeology I’m invested in.

But also—to get at the second part of your question about the Media Archaeology Lab (MAL)—while it’s perfectly effective to write conventional scholarly pieces on media archaeology, over the last couple of years, as the MAL has expanded and matured, I’ve found that undertaking hands-on experiments in the lab with obsolete but still functioning media from the past is perhaps an even more direct technique for breaking through the seductive veneer of the new and the resulting pull we feel to quickly discard our devices for something that’s only apparently better. New devices are only better if speed is the primary criterion for progress. But what about a machine like the Altair 8800b from 1976? As I ask my students when they come to the lab for the first time, is the Altair really just a profoundly limited version of contemporary computers? Undoubtedly, this eight-bit machine that operates with switches and whose output is flashing red LED lights is slow and difficult (or, just foreign) to operate, but, for one thing, for almost anyone born after the mid to late 1970s, operating this machine in the lab is likely your first direct experience of computing at the level of 1s and 0s. All our contemporary devices are constantly computing 1s and 0s, but we’ve become utterly estranged from how these devices actually work because they’ve been carefully crafted to seem as unlike computing with 1s and 0s as possible. So, my sense is that as you use a machine like the Altair, your contemporary laptop gradually loses its aura of magic or mystery and you start to palpably experience the ways in which your laptop consists of layer upon layer of interfaces that remove you ever more from the way your computer actually works. For another thing, more often than not, using the Altair opens up the possibility for reseeing the past—what if the computer industry took a slightly different turn and we ended up with Altair-like devices without screens or mice? And therefore using this obsolete machine also opens up the possibility of reseeing the present and the future—if we no longer passively accept what the computer industry gives us, what could our devices look like? What do we want them to do?

Jay Kirby: One of the things that strikes me about your work is your examples of interfaces. In Reading Writing Interfaces (Emerson, 2014), you use examples such as Emily Dickinson’s fascicles or typewriter poets. This selection seems to be outside the dominant history and perhaps constitutes a minor history. In this sense you undo assumptions of progress because we are looking at these minor histories that existed but that weren’t played out.

Lori Emerson: Yes, I think you’re right. But I’ve also discovered that, for some reason, concrete poetry is now taught in some form or other in high schools across the U.S. What’s not taught is how these poets were not creating poems of self-expression or poems for close reading—they were showing us how to use and misuse writing media. And of course, Dickinson is far from a minor poet, but, just as with the concrete poets, Dickinson’s wildness is often elided or reduced to cute aphorisms we memorize or close-read.

Jay Kirby: So, when you choose technologies to curate in the lab, is your choice based on how the technologies are part of a minor history, or is it based on how they are misunderstood in the same way as Dickinson and concrete poets?

Lori Emerson: Now that I think about it, I don’t see the oddities in the lab as minor or peripheral in the history of computing. I think of them—and I just recently came across this term from geology—in terms of their place in a branching phylogeny of technological devices. In this way, the Altair 8800b represents a branch off the main line, and it is peripheral only in the sense that it wasn’t an economic success. But certainly, for most people visiting the lab, their initial tendency is to marvel at how “primitive” the machines are, or even how ridiculous or impractical they are. At that point, I try to encourage visitors to slightly reframe their experience from imposing the present on the past to instead experiencing the friction that exists between our present-day interactions with these machines and the way the producers originally imagined and even prescribed our interactions. For example, the manuals in the lab for the Apple Macintosh, released in 1984, describe in minute detail, over many pages, how to double-click, how you train your finger to click very quickly, and what a window or a file is. Reading the manuals is akin to visiting a foreign land but from the obverse insofar as the manuals defamiliarize where you already live. All of a sudden you start to think, “Oh wow, clicking is not a natural gesture; there was a moment when people really had to think consciously about this gesture and train their bodies to adapt to this physical action.”

Jay Kirby: Now it doesn’t seem that way at all, I guess because double-clicking has become so ubiquitous.

Lori Emerson: I think so.

Jay Kirby: I’d like to talk about power in relation to these technologies. How do you see the relationships between power and knowledge in the creation of these interfaces? Who are the players, and what happens when the interface is either present, as you talk about early on in your book, or absent or transparent, as with later interfaces?

Lori Emerson: What do you mean by players? Do you mean people or technology?

Jay Kirby: I like to think of them on somewhat equal levels. When an interface is being designed, who or what influences decisions? And how do those decisions rearticulate relationships between knowledge and power?

Lori Emerson: When I was doing research for my book, I became fascinated with interfaces from the 1970s, especially ones related to Smalltalk and the Xerox Star, that were teetering right on the precipice of being designed for the novice as well as the expert. Now, I have never had the opportunity to actually use a Xerox Star—they are incredibly rare and most of them are in museums now—so I had to piece together my understanding of this machine by looking at manuals, magazines, and screenshots from the 1980s. But it seemed to me that interfaces like the one in the Star opened up possibilities for us not to have to live in the either/or scenario of being a user or an expert. This binary was a marketing ploy, advanced especially by Apple, to make people believe that you could only ever have a machine that was either for one or the other, and since most people identified as novices, so the logic went, your only choice was to buy a “user-friendly” Macintosh. Apple made the underlying workings of the Macintosh inaccessible or invisible so that you would never know how it worked. Moreover, Apple tried to nudge you into thinking that you’d never need to know.

Jay Kirby: So, it was a marketing and design decision to create an interface that made the underlying mechanisms invisible, as a way to create a false division between novice and expert?

Lori Emerson: Yes, I think so. There were interfaces proposed in the late ’70s that allowed those two groups, the experts and the novices, to use the same machine; the novice could use the ready-made tools included in the system, while the expert had the ability to create their own tools or even create tools to create more tools. But, to get back to your question about the relationship between power and knowledge, I want to make clear that the design and choice of interface is not a minor technical detail—it’s not just that interfaces could have been otherwise, but instead that interfaces determine how and what you create on your machine, and the choice of one over another opens up or forecloses on possibilities.

Jay Kirby: Interfaces rearrange the relationships between power and knowledge.

Lori Emerson: Yeah. While there’s no doubt that Apple had its eye on the untapped market of the novice user, in order to maintain their monopoly on this market over the long term, they had to design an interface that was not just easy to use but that also disempowered the user so they eventually came to think there was no need for them to understand how their machine worked or how it was acting on them, rather than them acting on their machine. And of course, developing this mind-set in consumers has had long-term, cross-generational repercussions as these “user friendly,” out-of-the-box machines found their way into homes and schools and became the first computer that many children used.

Jay Kirby: I am curious about your conception of how media technology, the interface, and the human interact. You drew on Marshall McLuhan in your book, but I felt as if Friedrich Kittler was also present. I’ve always read them as being, to a certain extent, opposed, where McLuhan seems to have the user extended through media and Kittler seems to posit media as something imposed on the user.

Lori Emerson: Kittler doesn’t come into my book obviously, but he’s very present in terms of how I’m thinking about media poetics and about rereading the history of experimental 20th- and 21st-century writing as expressions of what Kittler calls the histrionics of media. Kittler helped me read these strange photocopies of photocopies of photocopies by concrete poets from the 1960s and 1970s not for what the blurred text says but for how these texts are recordings of media facts. McLuhan was more obviously useful for the chapter on concrete poetry because he so clearly influenced and was influenced by these poets; he was one of the first to mesh together literary and media studies to argue that poets are “probes” into the limits and possibilities of writing media. I’ve never seen McLuhan and Kittler as incompatible, and I have to admit I sometimes think it’s intellectually lazy simply to claim that McLuhan was anthropocentric and Kittler was not.

Jay Kirby: Right.

Lori Emerson: Just in the last couple months, probably from teaching McLuhan for the 12th or 13th time, I’ve come to see that McLuhan and Kittler are much closer to each other than you might think. McLuhan does say that media first act as extensions of “man.” But if you just combine the two famous McLuhanisms, “media are the extensions of man” (McLuhan, 1994, p. 4) and “the medium is the message” (pp. 7–8), you can see there’s a strange hinge moment where media first extend certain human capabilities, but then they turn back on the human and shape the human. McLuhan’s entire theory of how media work falls apart if media don’t come back and shape humans. I understand that his entire system for understanding media begins and ends with humans, but at the same time he knows that each medium plays a fundamental role in determining what you can do and how you can do it. Kittler, to me, comes in at that hinge point and just follows the line of thought extending from the medium to the human.

Jay Kirby: You’ve already mentioned in our discussion the notion of user-friendliness, which seems to illustrate one part of this mutually influential relationship between humans and technology—the way in which design decisions determine how we use our computers, which in turn shape us as users. In Reading Writing Interfaces you note a shift in what user-friendly means. Why do you think this shift occurred?

Lori Emerson: As I mentioned briefly earlier, I think most of it had to do with economics. How long were we going to go without trying to make personal computers as profitable as possible?

And the minute you try to make them profitable, you are also going to have to standardize them, which involves creating a notion of the standard user who needs their computer to be “user-friendly.” I’m not sure anything like a standard user exists—it was created by companies like Apple through persistent and clever marketing to convince people they should identify as standard users. By contrast, in the ’70s, when the computer was not yet very profitable and it was still a niche market item for tinkerers and the curious, it was marketed in more philosophical terms. My favorite ad from that era is for Logo, a learning-oriented programming language. In an issue of Byte magazine from 1982 you can find an ad that describes Logo as “a language for poets, scientists, and philosophers” (Logo, 1982). Incredible! At this time, computers were more about learning and creativity—open-ended learning and creativity.

Jay Kirby: This idea of moving from open-ended play and creativity into something more limited makes it seem as if there is some sort of power constraining us. Michel Foucault discusses this limiting and controlling aspect of power, but he also says there can be a productive element to the exercise of power. Do you think the shift away from open-ended play and creativity is entirely negative?

Lori Emerson: People will always find a way to be playful and creative with the tools they’re given. In terms of the shift toward user-friendly design, I think every technology should steer clear of calling itself user-friendly because of the way that term is now associated with disempowering users. At the same time, without user-friendly design, we would never be able to type. Or, perhaps I should say that even though a keyboard design such as QWERTY is not the most efficient, its utter ubiquity has turned it into a kind of user-friendly design. Also, importantly, QWERTY does not disempower users so much as it slows down their typing. The QWERTY keyboard works well because it has become naturalized and invisible as a result of its ubiquity, so you no longer have to think about the act of typing itself. So the user-friendly does have some value, but, to go back to my earlier point, that value is lost once the user-friendly disempowers us and once it’s leveraged against us through the creation of a false binary between the novice and the expert user, between the creation of a machine that’s easy to use and one that allows you to build more tools. The interfaces from the 1970s that I talk about in my book show this binary isn’t necessary, and for a while it didn’t exist.

Jay Kirby: Maybe your last point can return us to the question of what changed. You mentioned the economy. But is there something we should be doing, perhaps through pedagogy, to help people look at interfaces differently?

Lori Emerson: Good question. That is what I use the Media Archaeology Lab for. I sit people down at, say, an Osborne I computer, and I invite them to use WordStar, which is entirely text-based and requires you to use about 90 different commands. Next, I ask them to read WordStar against Microsoft Word so that they can begin to actually see how other word processors have different or more capabilities than Word, and hopefully they begin to realize Word isn’t natural—it isn’t the only, or even the best, word processor. There are other ways you can process your documents and have very different, creative results. So to me, pedagogically, the best way to get students to think critically about interfaces is to read the past and the present against each other.

Jay Kirby: I wonder whether we short-circuit some sort of learning if we use an interface we immediately understand?

Lori Emerson: Is there such a thing as an interface you immediately understand?

Jay Kirby: I don’t know. I remember The New York Times ran a story about technology executives sending their children to the Waldorf School, which does not use computers (Richtel, 2011). The idea was that children should experience certain types of learning without the computer interface. Do you think these more transparent interfaces can short-circuit learning?

Lori Emerson: I’m guessing the Waldorf Schools recognize that primarily what’s lost when we use contemporary digital computer interfaces is a mode of learning and processing from print culture. Most of the skills we teach and test in schools are still based on print culture, so in that sense I can understand why one might think it’s beneficial to keep children away from computers in their early years. For me the main problem is not whether learning takes place via digital or analog devices; the problem is the way particular kinds of interfaces become naturalized, when we start to think that there’s only one way to interact with our computers and passively accept whatever the computer industry hands down to us.

Jay Kirby: When I first encountered a computer, it was a command-line interface. It was MS-DOS. Many of my students have never experienced it. Is an experience like seeing a command-line interface helpful for understanding interfaces?

Lori Emerson: Yes, I think so. And I also don’t think that experiencing the command-line interface requires a lot of expertise. I can write out a couple commands on the board and ask students to open up terminal, and all of a sudden they can have that experience. They’re accessing the same information as they might via a graphical user interface, but, through the command line, they can see how a different interface offers an utterly different perspective on the same information. So, yes, I think you should experience the command line. But I also don’t think students need to take years to learn computer programming. Just typing a couple lines of code into terminal can be very revelatory.
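To make that concrete, here is a small, hedged illustration (written in Python rather than as shell commands, simply to keep it self-contained): a few lines that pull up the same information a graphical file manager displays, names, sizes, and modification dates, through a purely textual interface.

```python
# List the current folder the way a file manager would, but as text:
# the same information, seen through a different interface.
import os
import time

for entry in sorted(os.scandir("."), key=lambda e: e.name):
    info = entry.stat()
    modified = time.strftime("%Y-%m-%d %H:%M", time.localtime(info.st_mtime))
    kind = "dir " if entry.is_dir() else "file"
    print(f"{kind}  {info.st_size:>10}  {modified}  {entry.name}")
```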

Jay Kirby: And many people aren’t going to go and study computer science after that experience. So what does your average person gain from the experience of using the command line?

Lori Emerson: I was recently reading about a famous conversation that took place between Foucault and Noam Chomsky (Chomsky & Foucault, 2006) that made it clear Foucault was interested in finding ways to denaturalize political discourse. That’s no small thing. It’s no small thing to denaturalize the tools that we use every single day. So, helping the average person to see how much their access to information is determined by mechanisms that they have no control over and that shape their access to knowledge and creation is profound.

Jay Kirby: So there is a political dimension to it?

Lori Emerson: Absolutely.

Jay Kirby: To return to the argument you lay out in your book, you move forward from the command line to graphical user interfaces and, more recently, to gestural interfaces. Each of these developments seems to make the interface more transparent or more difficult to perceive. Do you have to have that transparency—in the more negative sense of removing access to elements of the interface—if you move from the command line to a GUI to a so-called natural interface?

Lori Emerson: No, not at all. That was the point I was trying to make in the second chapter. There are not only other interfaces but also other visual interfaces. It is not necessary to move from command line to graphical user interfaces. It’s just a continuation of a line of thought that has come to dominate computing. Here is an example that I didn’t talk about in my book. The Canon Cat computer was developed by Jeff Raskin—you remember that Jeff Raskin was on the design team for the Apple Macintosh. I think he left in 1982 because of a disagreement with Steve Jobs. Then he worked on the Cat, which Canon eventually bought. Raskin designed the Cat to have an interface that was entirely text based—not command line, not graphical user interface, but text based. This was 1987. He called it an advanced work processor, not a word processor and not a personal computer.

Jay Kirby: What did that look like?

Lori Emerson: It’s this cute little beige computer with a handle on the back for portability. It has no mouse; instead, all the functionality is built into the keyboard. And it has all sorts of unusual functionalities like “leap,” for example, which is a sophisticated version of search and find that we don’t have today.

Jay Kirby: Interesting. These less common computers, as you note, give insight into what could have been. Another element that interested me in your work was that your examples of people who interrogate these interfaces are artists. In a way, artists are also less common. Are there ways that nonartists can or should be interrogating interfaces? How might one cultivate a critical approach to understanding interfaces in an everyday way?

Lori Emerson: The first answer that comes to mind is that tinkering, play, and creativity are open to anybody and everybody. And in fact, creating glitch art is now accessible to anyone. There is glitch software, and there are step-by-step instructions online that show you how you can get into the code of a digital image and glitch it from within, for example by opening it as a Word document or a plain text document, editing it, and saving it back. You can also take any function on your computer and push up against it. Anything. Ask yourself, is it possible to break it? How do I misuse it? What are some ways this function could work that the manufacturer didn’t anticipate?
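A minimal sketch of that kind of glitching, assuming a JPEG called photo.jpg sits in the working directory (the filename, offsets, and number of flipped bytes are arbitrary): open the image as raw bytes, corrupt a handful of them, and save a copy. Some edits will simply break the file; others produce the visual glitches the online tutorials are after.

```python
# Databending sketch: overwrite a few random bytes of an image file.
# Skipping the first couple of kilobytes leaves the header intact so the
# glitched copy usually still opens.
import random

def glitch(src="photo.jpg", dst="photo_glitched.jpg", flips=20, seed=1):
    data = bytearray(open(src, "rb").read())
    random.seed(seed)
    start = min(2048, len(data) // 2)   # keep the file header untouched
    for _ in range(flips):
        data[random.randrange(start, len(data))] = random.randrange(256)
    open(dst, "wb").write(bytes(data))

if __name__ == "__main__":
    glitch()
```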

Jay Kirby: That is a good example of an accessible way to understand interfaces differently—and related to another new development I see in computers and interfaces, which is the surge in popularity of microcontrollers and small single-board computers like Arduinos and Raspberry Pis. How do these fit into this archaeological cut between the transparent interfaces of many computers today and these older pieces of technology?

Lori Emerson: I have some in the lab, and I believe they stand as wonderful interventions into this culture of passively consuming software/hardware configurations. Our Raspberry Pi is very small and affordable. You can see how it works, and you can use it to build other machines on top of it. But, to complicate what I just said about how tinkering is open to anyone, I still worry about accessibility. Even if the price of a Raspberry Pi isn’t much more than the price of a book, I worry especially about gender and how the culture around the machines may not be amenable to or welcoming for women and minorities. I know there are women, for example, who are incredibly adept at playing with Arduinos and Raspberry Pis, but I don’t know any in Boulder. None have shown up at my doorstep. I have no doubt they exist, but at the same time, I know women are a minority in this community.

Jay Kirby: As these microcontrollers look so different from what we might think of as a computer today, do you believe the aesthetics of these objects play into understanding computers and interfaces? I’m thinking of how these early computers came to us as chunks of metal, versus contemporary devices that are almost all screen.

Lori Emerson: Yes. While I was teaching last week, I was thinking about how we only ever look at screens, and how we are never aware of how there is another world behind them. It’s as if the screen was created so you would only look at it rather than think about its situatedness, its constructedness.

Jay Kirby: I understand. There is even a difference between the old CRT screens that have depth—and even though that is not the computational part, there is the idea there is something back there—versus these iMacs that are sheer screen . . .

Lori Emerson: Yes. Or, think once more about the Altair, how it had no screen and yet it was a perfectly functional computer.

Jay Kirby: Exactly. Or the Arduinos.

Lori Emerson: That’s right. It’s difficult because everything has to be—is this Apple ideology?—everything has to be “light and airy.”

Jay Kirby: Not only user-friendly . . .

Lori Emerson: It can’t have heft, or bulk, or weight.

Jay Kirby: As we talk about what these computers look like and what they do, what do you look for in a piece of technology when you’re thinking about adding it to the Media Archaeology Lab’s collection? What makes a good candidate for the lab?

Lori Emerson: I’m always looking for alternative visions of what could be, anything that is odd and unusual, as well as anything that is ubiquitous. Those two poles. It’s important to have Apple Macintoshes in the lab along with the whole lineage of Apple computers because of how much they’ve influenced the computer industry. At the same time, you have to have the oddities or the outliers for reasons I’ve already touched on. I should also mention we are starting to collect analog media, or any kind of media that archaeologically underlies our contemporary media. For example, we just acquired an Edison Diamond Disc phonograph from 1912 from a used furniture store in Boulder. The phonograph came with 30 discs, and each has a large warning on the outside of the record sleeve that says something like, “You may not use this phonograph disc with any machine other than the Edison. If you do, you will destroy the needle and you will destroy the record.” Once you place this warning beside any contemporary proprietary technology, you see quite clearly that the notion of proprietary technology did not originate with Apple or Microsoft; it has a long lineage going at least as far back as Edison. It’s also utterly American.

Jay Kirby: Yeah. That is really fascinating. I guess at that time the phonograph wasn’t yet standardized.

Lori Emerson: As far as I know, Edison and Victrola were competing not just for the largest share of the market but also to make their respective machines the standard.

Jay Kirby: Perhaps this is a good place to talk about your current project, as you’ve been moving from discussing the standardization of interface technology to discussing the standardization of Internet protocols, in particular TCP/IP. Can you tell us more about what you are doing with this project?

Lori Emerson: Yes, thanks for asking about that. “Other Networks” began with an innocent question Matthew Kirschenbaum asked me at the Modern Language Association annual convention a couple years ago. He asked me whether I talk about the ’90s in Reading Writing Interfaces, and I said no, I don’t, and immediately wondered why it didn’t seem to make sense to have a chapter dedicated to that decade. I think the reason is that the ’90s are not so much about hardware and software; they are instead more a continuation of hardware/software design principles that had been standardized by the late 1980s. Instead, in terms of digital media, the ’90s are more about networks and the so-called explosion of the Internet.

So with this new project, I wanted to see if I could extend the logic of media archaeology to look at the materialist underpinnings, the ideological underpinnings, of the Internet—to imagine how it could have been otherwise, which then led me to look into the particulars of TCP/IP, the protocol that allows all the different networks on the Internet to communicate with each other. That in turn led me to dig through manuals and textbooks on TCP/IP and browse the thousands of requests for comments, or RFCs. These are basically a series of online memos recording people’s proposals and decisions to tweak TCP/IP, and, among other things, the RFCs record the development of TCP/IP and its official adoption in 1982 or 1983. What I was trying to do was to trace the economic, institutional, and philosophical pressures that went into creating TCP/IP. At the same time I was also thinking about what other protocols were up for debate and what difference those might have made to our experience of the Internet today. As it turns out, there were alternatives and there still are alternatives, like the network architecture RINA, but my sense is that it’s been difficult to convince people that a new or different protocol might be beneficial because these alternatives wouldn’t make a dramatic difference to our experience of the Internet. I think people want to hear about some version of the Internet that’s completely new and alien and, as far as I know, this just doesn’t exist.

Jay Kirby: So for whom or for what would these alternative protocols make a difference?

Lori Emerson: Well, this computer scientist I have been talking to—John Day, who is at Boston University—his argument is that a different structure for TCP/IP might have made the entire Net neutrality debate moot. He believes that a particular layer in TCP/IP, the transport layer, is flawed. The transport layer is what makes the entire discussion about slow lanes and fast lanes possible, because that is where the Internet’s longstanding problem with congestion plays out. So, if the designers of TCP/IP had managed to put together a different set of layers and a different configuration—maybe not even layers—there wouldn’t be a congestion problem and we wouldn’t need to have this discussion about Net neutrality.

Jay Kirby: This seems to relate back to the idea of interfaces, too. The interfaces can affect the relationships of knowledge and power. Do you conceive of TCP/IP along the same lines as an interface? Rather than a person interfacing with technology or writing, TCP/IP allows for computers to interface with each other. Is this correct?

Lori Emerson: On the surface there is a perfect parallel between the way TCP/IP is structured and the way interfaces were designed for personal computers, both of which were developed around the same time. TCP/IP is structured according to layers. This model of layers was apparently imported from models for how operating systems were conceived of in the late ’60s, and then it was just carried over from operating systems into networks. However, there seem to be significant differences in how terms like “interface”—and even “black box”—are mobilized in the two spheres. For example, the layers that constitute TCP/IP are separated by what engineers refer to as interfaces, so I first assumed this meant those interfaces function in the same way that an interface does for us as users. It turns out this isn’t the case. What the designers of TCP/IP have done is create interfaces that allow the layers to communicate with each other insofar as one layer picks up the task of conveying bits where the lower layer left off. The interfaces between layers also black box the layers from each other—the idea is that if any one of the layers stops working, the entire system should not be affected because the layers have been separated from each other.
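For readers who want the idea of layers-as-black-boxes in miniature, here is a toy sketch. It is not how TCP/IP is implemented, and the class names and "headers" are invented; the point is only that each layer talks to the one below it through a narrow send/receive interface and never sees its internals.

```python
# Toy illustration of layering and black boxing: each layer exposes a narrow
# interface (send/receive), adds its own header, and hides everything else
# from the layer above. Names and "headers" are invented for illustration.

class PhysicalLayer:
    def __init__(self):
        self.wire = []                      # stand-in for the physical medium

    def send(self, payload: bytes):
        self.wire.append(payload)

    def receive(self) -> bytes:
        return self.wire.pop(0)

class NetworkLayer:
    def __init__(self, lower):
        self.lower = lower                  # only the interface below is visible

    def send(self, payload: bytes):
        self.lower.send(b"NET|" + payload)  # wrap, then hand down

    def receive(self) -> bytes:
        return self.lower.receive().removeprefix(b"NET|")

class TransportLayer:
    def __init__(self, lower):
        self.lower = lower

    def send(self, payload: bytes):
        self.lower.send(b"TRA|" + payload)

    def receive(self) -> bytes:
        return self.lower.receive().removeprefix(b"TRA|")

# A host only ever talks to the top layer; everything underneath is black-boxed.
stack = TransportLayer(NetworkLayer(PhysicalLayer()))
stack.send(b"hello")
assert stack.receive() == b"hello"
```

If any one layer's internals change, nothing above it needs to know, which is the sense in which the layers are black boxed from each other.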

Jay Kirby: This is a positive use of black boxing.

Lori Emerson: Yes, exactly. I understand now there’s a way in which black boxing and layering are sometimes very useful, whereas I had previously assumed that black boxing and layering only insert more barriers to access for the user.

Jay Kirby: This speaks to what you said about how users don’t always want the interface to be present. Sometimes users want it to recede from view.

Lori Emerson: Fade into the background.

Jay Kirby: As a way to wrap things up, what do you believe people should be attentive to when they are using an interface? Or what should people hope for in an interface?

Lori Emerson: I am wary of any system, any interface, that claims to do things for me and doesn’t allow me to either do it myself or to understand how it’s been done for me and then intervene in some way so that I can do it in whatever way I think is appropriate. This patronizing attitude toward the user is harmful.

Jay Kirby: So that’s what people should be wary of. And you’ve implied an answer to the second part of my question, about what you want from an interface.

Lori Emerson: I want an interface that is configurable and flexible according to my needs. It may come with certain defaults, but I need to be able to configure it to do what I want it to do.

Jay Kirby: That’s an interesting idea. Not long ago TIME did their Person of the Year as “You,” by which they meant that the individual can now get whatever they want. But at the same time, there is this idea that someone else will give you exactly what you want. It seems a sort of preemption. Rather than “I want x, y, or z,” companies state that “you want x, y, and z.”

Lori Emerson: Oddly, though, I think both positions usually amount to the same thing, as companies such as Facebook offer you the appearance of a proliferation of choice, the illusion that we can make our experience of Facebook exactly as we’d like it—when, of course, we’re only ever offered predetermined choices. If TIME’s Person of the Year is “You,” then this “you” is a corporately controlled version that leads you to believe you’re somehow an empowered user with the freedom to customize anything and everything.

Jay Kirby: Yeah. It’s the commodification of choice rather than choice as choice.

Lori Emerson: That’s right. Rather than open-ended choice, it’s like choosing between Coke and Pepsi, which really isn’t a choice at all.

Jay Kirby: No more RC Cola.

Lori Emerson: Yeah. And no more Fanta. It’s like the 1980s standardization of the personal computer all over again!

References
Chomsky, N., & Foucault, M. (2006). The Chomsky-Foucault debate: On human nature. New York, NY: New Press.

Emerson, L. (2014). Reading writing interfaces: From the digital to the bookbound. Minneapolis, MN: University of Minnesota Press.

Logo. (1982, February). [Advertisement for Logo]. Byte, 255.

McLuhan, M. (1994). Understanding media: The extensions of man. Cambridge, MA: MIT Press.

Richtel, M. (2011, October 22). A Silicon Valley school that doesn’t compute. The New York Times. Retrieved from http://www.nytimes.com/2011/10/23/technology/at-waldorf-school-in-silicon-valley-technology-can-wait.html


Minitel, 1978-2012 // an other network

Pages from a Minitel scan

The premise of “Other Networks,” the book project I’m working on right now, is simple and draws heavily from the premise of the Media Archaeology Lab: uncover what was and what could have been in order to reimagine what still could be. This mantra applies just as much to the dead ends in computer hardware and software you can find in the lab as it does to protocols and networks.

In terms of networks that existed before and outside of the Internet, networks I hope could reignite our sense of what the Internet of the future could look like, the French network Minitel (which ran from 1978 until 2012) is a fascinating case study I will have to spend substantial time researching and writing about: it was the result of heavy government support as well as corporate interests, along with the usual inventive misuses of the network by everyday consumers. And as with so many networks I will discuss in “Other Networks,” Minitel is no longer well known in the English-speaking world. While, thankfully, scholars such as Kevin Driscoll and Julien Mailland are working on a much-needed English-language academic monograph on Minitel, at the moment the only book on it in English I’m aware of is Marie Marchand’s A French Success Story…The Minitel Saga, published in 1988 and translated by Mark Murphy. The book is long out of print, and at the moment I don’t see any used copies for sale online (although sometime last year I did spot a copy for sale for a stunning $800). As such, in the name of access to information about the thousands of networks that existed before the U.S. consolidated them by way of TCP/IP, I thought I’d make available a pdf of this valuable volume (pdf, 25 MB).

Minitel was proposed and adopted in 1978, the same year that Simon Nora and Alain Minc submitted their enormously influential report The Computerization of Society, in which they coined the term ‘telematics’. Minitel also came on the heels of a “phone-in-every-home” program (pre-dating the One Laptop Per Child project by decades) proposed in 1975 by Gérard Théry, then head of the French telecommunications administration that later became France Télécom, to increase subscriber telephone lines; Théry firmly believed the telephone was going to be the cornerstone of any computerized country – “A phone in every home is the cutting edge of a computer in every home.” A few years later, 2,500 people in the Parisian suburb of Vélizy volunteered to use the system initially called “Teletel.” Marie Marchand tellingly writes in A French Success Story…The Minitel Saga that while “households used the system six times a month on average, consulting 20-odd services for a total connect time of one-and-a-half hours per month…”, “These overall figures concealed a number of pronounced disparities…Age disparity: people under 30 used their terminals more than those over 30. Gender disparity: women used them but little. Class disparity: top executives connected more often than middle management types, who in turn called more often than blue collar workers. Further, a flagrant disparity emerged in terms of services used. Five service providers alone accounted for over half the calls…” (53)

In 1982, instead of delivering increasingly expensive and difficult-to-update telephone book directories, France Telecom loaned Minitel terminals to residents all across France. By 1988, 3.5 million Minitel sets had been installed, with users logging six million hours per month and taking advantage of 8000 services. And by 1999, roughly 9 million terminals could access the network, which was in turn used by 25 million people who took advantage of 26,000 services. By this time, not only had Minitel inaugurated an era that continues today of disparities between visible and invisible (or even absent) users based on gender, race, sexuality, and socio-economic status, but it had also inaugurated the era of online pornography, the use of networks to coordinate student protests, and experiments with pseudonymous online identity.

For more on this immensely influential French “other network,” download a pdf of A French Success Story…The Minitel Saga.

workshop // Othernet, Alternet, Darknet

Once more, thank you very much for inviting me to talk with you about “Other Networks” and give a workshop on “Othernet, Alternet, Darknet // the Past, Present, and Future of Alternate Networks.” In preparation for today’s workshop I suggested you read “Against the Frictionless Interface! An Interview with Lori Emerson” and “What’s Wrong With the Internet & How to Fix It: An Interview with John Day.”

Before I build on these readings with a more extensive discussion of TCP/IP, I would like to discuss what it currently means for many people to be on the Internet and then show you a couple of tools that make alarmingly clear the way in which profit and capital saturate every single one of our clicks online. To that end, I’d like you to download a couple of revealing extensions for your Chrome browser and/or an add-on for your Firefox browser to clearly visualize what’s happening when you’re on the web; I use both browsers, so I encourage you to download both, but it’s also fine if you just want to play with one.

  • open up Firefox and install Lightbeam – an add-on that “shines a light on who’s watching you” by way of interactive visualizations that show you the first and third party sites you’re often unwittingly interacting with on the web
  • now open up Chrome and install Disconnect – a browser extension that stops major third parties from tracking the webpages you go to
  • have any of you used these tools before? anything revealing or surprising?

Now I’d like to talk about alternatives to the current structure of the Internet, beginning with a brief overview of how TCP/IP itself could have been different (picking up the interview with John Day), leading to a different present-day Internet, and then moving on to contemporary projects and platforms you might use to get off or disrupt the Internet. I will touch on the following:

  • how thinking about the past and present of networks could be a way to imagine the future of our connected lives
  • how excavating the knowledge/power structures underlying TCP/IP can denaturalize that monolith “the Internet” and help us think about how the Internet could be otherwise. In particular, I will discuss:
    • how TCP/IP was created to benefit the free market, not necessarily to exemplify democratic ideals of freedom and openness
    • the result of intense, complex political wrangling between communities of engineers, industry workers, and representatives who were almost uniformly white, middle class men often from the same school or neighborhood
    • how the protocol is based on concepts of blackboxing and layering taken from the design of operating systems rather than networks
    • how there were and still are alternatives to TCP/IP such as RINA that could potentially make the Internet work better than it currently does


With this groundwork, I would like to use the rest of the workshop to think as expansively, broadly, and imaginatively as we can about what an alternative Internet might look like – one that we build ourselves, imagining for the moment that we can build whatever structure we dream up. Here, then, are some contemporary examples of Other Networks I would like you to explore and/or experiment with:

  • Netless, created by Danja Vasiliev
  • Alternet, created by Sarah T. Gold
  • Firechat, created by Open Garden
    • create an account, see if you can find the #OtherNetworks chatroom I created, and start talking to each other
  • PirateBox, created by David Darts
    • if there’s time, I will demo a PirateBox I built to prove to you that even the most inept Internet user can do it
  • Tor, created by the United States Naval Research Laboratory and DARPA
    • read my notes and warnings below, download Tor, and try accessing the links I include below

Because Tor has become almost synonymous with criminal activity, here, for the sake of educating you, is a bit more on what Tor actually is and why you might like to use it. Tor is primarily a privacy network that allows you to access the surface Internet without being tracked; it also allows you to access the deep web/darknet – any site or material that’s on the Internet but not indexed by search engines. Keep in mind that most of the deep web/darknet is dedicated to innocent forums, blogs, essays, and so on; because of the protection it offers, the darknet is attractive to activists in oppressive regimes as well as to government agencies.

Why use Tor? While the Tor browser will work much slower than Chrome or Firefox, if you value privacy or if you would like to find a way to circumvent the online tracking we discussed earlier, you might like to give it a try. You might also give it a try if you would like to become a more informed, more active Internet user.

Some warnings:

    • not surprisingly, Tor does not guarantee perfect anonymity; if you don’t use a Virtual Private Network in addition to Tor, people can still see you’re using Tor even if they can’t necessarily see what sites you’re visiting; hopefully it goes without saying that you shouldn’t use a university VPN – instead consider purchasing the very inexpensive IPVanish and take a look at tips here and here to understand better how a VPN works with Tor
    • don’t torrent over Tor, and especially don’t use BitTorrent and Tor together
    • according to the Tor website, avoid opening .doc and .pdf documents you download while on Tor, as they can be made to fetch resources outside of Tor and so reveal your IP address
    • try to use HTTPS versions of websites; Tor encrypts your traffic to and within the Tor network but to ensure encryption at your final destination, try to also use the HTTPS Everywhere extension
    • to make sure you’re not tracked down if you inadvertently visit a website that’s criminal in nature, turn off scripts and plugins in the Tor options (according to their website, you do this by clicking the button just before the address bar).
    • be very cautious about clicking on links on Tor – try to only use known directories to reach authenticated destinations.

Here are a very few safe Tor links that have worked for me:

  • search engine TORCH at http://xmh57jrzrnw6insl.onion/
  • search engine DuckDuckGo at http://3g2upl4pq6kufc4m.onion/
  • the first issue of a Tor-hosted literary journal, The Torist (pdf) at http://toristinkirir4xj.onion/issue1.pdf
  • and, surprisingly, Facebook! at https://www.facebookcorewwwi.onion

If you’d like to continue thinking about these issues post-workshop, one place to start is to think about the repercussions of the underlying structure of the Internet – especially in the context of how the structure might create a certain power dynamic that excludes (women, minorities, underprivileged communities, those who are less technically savvy) more than it includes. Questions I’ll leave you with:

  • What does a cooperatively owned Internet look like and why might we want one? If you need help getting started, consider checking out Platform Cooperativism.
  • What does a non-profit, non-commercial network look like?
  • What does a feminist network look like? Can the Internet be feminist? These 15 “Feminist Principles of the Internet” might help you get started. You might also like to look at this interview with Jac sm Kee, who has been deeply involved in the Association for Progressive Communications’ (APC) Women’s Rights Programme; Kee states that “to start, a feminist Internet is one where everyone has universal, equal and meaningful access to an open and transformative Internet to enable the exercise of all of our rights, to play, to create, to form communities, to organize for change, in freedom and pleasure.”

What’s Wrong With the Internet and How We Can Fix It: Interview With Internet Pioneer John Day

I appreciate very much the willingness of the editors at Ctrl-Z: New Media Philosophy to publish an updated, revised version of an interview I conducted with the computer scientist and Internet pioneer John Day via email. The published version is available at the link above and the original version is below.

The interview came about as a result of a chapter I’ve been working on for my “Other Networks” project, called “The Net Has Never Been Neutral.” In this piece, I try to expand the materialist bent of media archaeology, with its investment in hardware and software, to networks. Specifically, I’m working through the importance of understanding the technical specs of the Internet to figure out how we are unwittingly living out the legacy of the power/knowledge structures that produced TCP/IP. I also think through how the Internet could have been and may still be utterly different. In the course of researching that piece, I ran across fascinating work by Day in which he argues that “the Internet is an unfinished demo” and that we have become blind not only to its flaws but also to how and why it works the way it works. Below you’ll see Day expand specifically on five flaws of the TCP/IP model that are still entrenched in our contemporary Internet architecture and, even more fascinating, on the ways in which a more sensible structure for handling network congestion (like the one proposed by the French CYCLADES group) would have made the issue of net neutrality beside the point. I hope you enjoy it, and many, many thanks to John for taking the time to correspond with me.

*

Emerson: You’ve written quite vigorously about the flaws of the TCP/IP model that go all the way back to the 1970s and about how our contemporary Internet is living out the legacy of those flaws. Particularly, you’ve pointed out repeatedly over the years how the problems with TCP were carried over not from the American ARPANET but from an attempt to create a transport protocol that was different from the one proposed by the French CYCLADES group. First, could you explain to readers what CYCLADES did that TCP should have done?

Day: There were several fundamental properties of networks the CYCLADES crew understood that the Internet group missed:

  • The Nature of Layers,
  • Why the Layers they had were there,
  • A complete naming and addressing model,
  • The fundamental conditions for synchronization,
  • That congestion could occur in networks, and
  • A raft of other missteps, most of which follow from the previous five, though some are unique.

First and probably foremost was the concept of layers. Computer scientists use “layers” to structure and organize complex pieces of software. Think of a layer as a black box that does something, but whose internal mechanism is hidden from the user of the box. One example is a black box that calculates the 24-hour weather forecast. We put in a bunch of data about temperature, pressure, and wind speed, and out pops a 24-hour weather forecast. We don’t have to understand how the black box did it. We don’t have to interact with all the different steps it went through to do that. The black box hides the complexity so we can concentrate on other complicated problems for which the output of the black box is input. The operating system of your laptop is a black box. It does incredibly complex things but you don’t see what it is doing.

Similarly, the layers of a network are organized that way. For the ARPANET group, BBN [Bolt, Beranek, and Newman] built the network and everyone else was responsible for the hosts. To the people responsible for the hosts, the network of IMPs was a black box that delivered packets. Consequently, for the problems they needed to solve, their concept of layers focused on the black boxes in the hosts. So the Internet’s concept of layers was focused on the layers in the hosts, where their primary purpose was modularity. The layers in the ARPANET hosts were the Physical Layer (the wire), the IMP-Host Protocol, the NCP, and the applications, such as Telnet and maybe FTP. For the Internet, they were Ethernet, IP, TCP, and Telnet or HTTP, etc. as the application. It is important to remember that the ARPANET was built to be a production network to lower the cost of doing research on a variety of scientific and engineering problems.

The CYCLADES group, on the other hand, was building a network to do research on the nature of networks. They were looking at the whole system to understand how it was supposed to work. They saw that layers were more than just local modularity: they were sets of cooperating processes in different systems, and, most importantly, different layers had different scope, i.e., different numbers of elements in them. This concept of the scope of a layer is the most important property of layers. The Internet never understood its importance.

The layers that the CYCLADES group came up with in 1972 were the following: 1) the Physical Layer – the wires that go between boxes. 2) The Data Link Layer, which operates over one physical medium, detects errors on the wire, and in some cases keeps the sender from overrunning the receiver. But most physical media have limitations on how far they can be used: the further data is transmitted on them, the more likely there are errors. So these links may be short. To go longer distances, a higher layer with greater scope exists over the Data Link Layer to relay the data. This is traditionally called 3) the Network Layer.

But of course, the transmission of data is not just done in straight lines, but as a network, so that there are alternate paths. We can show from queuing theory that regardless of how lightly loaded a network is, it can have congestion, where there are too many packets trying to get through the same router at the same time. If the congestion lasts too long, it will get worse and worse and eventually the network will collapse. It can be shown that no amount of memory in the router is enough, so when congestion happens packets must be discarded. To recover from this, we need 4) a Transport Layer protocol, mostly to recover packets lost due to congestion. The CYCLADES group realized this, which is why there is a Transport Layer in their model. They started doing research on congestion around 1972. By 1979, there had been enough research that a conference was held near Paris. DEC and others in the US were doing research on it too. Those working on the Internet didn’t understand that such a collapse from congestion could happen until 1986, when it happened to the Internet. So much for seeing problems before they occur.

Emerson: Before we go on, can you expand more on how and why the Internet collapsed in 1986?

Day: There are situations where too many packets arrive at a router and a queue forms, like everyone showing up at the cash register at the same time, even though the store isn’t crowded. The network (or store) isn’t really overloaded but it is experiencing congestion. However, in the Transport Layer of the network, the TCP sender is waiting to get an acknowledgement (known as an “ack”) from the destination that indicates the destination got the packet(s) it sent. If the sender does not get an ack in a certain amount of time, the sender assumes that packet and possibly others were lost or damaged and re-transmits everything it has sent since it sent the packet that timed out. If the reason the ack didn’t arrive is that it was delayed too long at an intervening router and the router has not been able to clear its queue of packets to forward before this happens, the retransmissions will just make the queue at that router even longer. Now remember, this isn’t the only TCP connection whose packets are going through this router. Many others are too. And as the day progresses, there is more and more load on the network with more connections doing the same thing. They are all seeing the same thing, contributing to the length of the queue. So while the router is sending packets as fast as it can, its queue is getting longer and longer. In fact, it can get so long and delay packets so much that the TCP sender’s timers will expire again and it will re-transmit again, making the problem even worse. Eventually, the throughput drops to a trickle.

As you can see, this is not a problem of not enough memory in the router; it is a problem of not being able to get through the queue. (Once there are more packets in the queue than the router can send before retransmissions are triggered, collapse is assured.)  Of course delays like that at one router will cause similar delays at other routers.  The only thing to do is discard packets.
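Day's feedback loop, timers expiring while the queue is still full so that every retransmission only lengthens the queue, can be sketched as a toy single-router simulation. This is emphatically not a model of real TCP: the capacity, timeout, and load numbers below are invented, and "goodput" here simply means forwarded packets that are not duplicates.

```python
# Toy congestion-collapse simulation: one router with a fixed drain rate, and
# senders that retransmit everything outstanding each time the queueing delay
# exceeds their timeout. All numbers are made up for illustration.

CAPACITY = 100   # packets the router can forward per tick
TIMEOUT = 5      # ticks a sender waits before retransmitting

def simulate(offered, ticks=200):
    queue, goodput = 0, 0.0
    for _ in range(ticks):
        delay = queue / CAPACITY                 # rough queueing delay, in ticks
        rounds = int(delay // TIMEOUT)           # how many timeouts have already expired
        duplicates = offered * rounds            # retransmissions piled on top of new traffic
        queue += offered + duplicates
        sent = min(queue, CAPACITY)
        queue -= sent
        goodput += sent * offered / (offered + duplicates)  # only originals count
    return goodput / ticks

for load in (50, 90, 110, 150, 300):
    print(f"offered {load:3d} pkts/tick -> average goodput {simulate(load):6.1f} pkts/tick")
```

Running it reproduces, in miniature, the throughput-versus-load curve described next: goodput climbs with offered load, flattens near capacity, then falls away once retransmissions start feeding the queue.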

What you see in terms of the throughput of the network vs. load is that throughput climbs very nicely, increasing, then it begins to flatten out as the capacity of the network is reached, then as congestion takes hold and the queues get longer, throughput starts to go down until it is just a trickle. The network has collapsed. The Internet did not see this coming. Nagle warned them in 1984 but they ignored it. They were the Internet – what did someone from Ford Motor Company know? It was a bit like the Frank Zappa song, “It Can’t Happen Here.” They will say (and have said) that because the ARPANET handled congestion control, they never noticed it could be a problem. As more and more IP routers were added to the Internet, the ARPANET became a smaller and smaller part of the Internet as a whole and it no longer had sufficient influence to hold the congestion problem at bay.

This is an amazing admission. They shouldn’t have needed to see it happen to know that it could. Everyone else knew about it and had for well over a decade. CYCLADES had been doing research on the problem since the early 1970s.  The Internet’s inability to see problems before they occur is not unusual.  So far we have been lucky and Moore’s Law has bailed us out each time.

Emerson: Thank you – please continue with what CYCLADES did that TCP should have done.

Day: The other thing CYCLADES noticed about layers in networks, because they were looking at the whole network, was that layers weren't just modules. Layers in networks are more general, because they use protocols to coordinate their actions across different computers. Layers are distributed shared state with different scopes. Scope? Think of it as building with bricks. At the bottom we use short bricks to set a foundation: protocols that go a short distance. On top of that are longer bricks, and on top of that longer ones yet. So the Physical and Data Link Layers have one scope; the Network and Transport Layers have a larger scope, over multiple Data Link Layers. Quite soon, circa 1972, researchers started to think about networks of networks. The CYCLADES group realized that an internet Transport Layer was a layer of greater scope yet, one that operated over multiple networks. So by the mid-1970s, they were looking at a model that consisted of Physical and Data Link Layers of one small scope, used to create networks with a Network Layer of greater scope, and an Internet Layer of greater scope yet, over multiple networks. The Internet today has the model I described above for a network architecture of two scopes, not an internet of three scopes.

Why is this a problem? Because congestion control goes in that middle scope. Lacking that scope, the Internet group put congestion control in TCP, which is about the worst place to put it: it thwarts any attempt to provide Quality of Service for voice and video, which must be done in the Network Layer, and it ultimately precipitated a completely unnecessary debate over net neutrality.

Emerson: Do you mean that a more sensible structure to handle network congestion would have made the issue of net neutrality beside the point? Can you say anything more about this? I’m assuming others besides you have pointed this out before?

Day: Yes, this is my point, and I am not sure that anyone else has pointed it out, at least not clearly. It is a little hard to see clearly when you're "inside the Internet." There are several points of confusion in the net neutrality issue. One is that most non-technical people think that bandwidth is a measure of speed, when it is really a measure of capacity. Bits move at the speed of light (or close to it) and they don't go any faster or slower, so bandwidth really isn't a measure of speed. The only aspect of speed in bandwidth is how long it takes to move a fixed number of bits, and moving them consumes capacity on a link. If a link has a capacity of 100Mb/sec and I send a movie at 50Mb/sec, I only have another 50Mb/sec I can use for other traffic. So to some extent, talk of a "fast lane" doesn't make any sense. Again, bandwidth is a matter of capacity.
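As a back-of-the-envelope illustration of bandwidth as capacity rather than speed (my numbers, echoing the 100Mb/sec example above):

```python
# Bandwidth as capacity, not speed: the bits don't travel faster on a
# "faster" link; a bigger pipe just moves more of them per second.
# Illustrative numbers only.

LINK_CAPACITY_MBPS = 100          # total capacity of the link

def transfer_time(megabits, rate_mbps):
    """Seconds needed to move a fixed number of bits at a given rate."""
    return megabits / rate_mbps

movie_stream_mbps = 50            # a video stream consuming 50 Mb/s
remaining = LINK_CAPACITY_MBPS - movie_stream_mbps
print(f"capacity left for everything else: {remaining} Mb/s")

# The same 400 Mb file takes longer only because less capacity is free,
# not because the bits move any more slowly.
print(transfer_time(400, LINK_CAPACITY_MBPS))   # 4.0 s with the whole link
print(transfer_time(400, remaining))            # 8.0 s sharing with the movie
```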

For example, you have probably heard the argument that Internet providers like Comcast and Verizon want “poor little” Netflix to pay for a higher speed, to pay for a faster lane. In fact, Comcast and Verizon are asking Netflix to pay for more capacity! Netflix uses the rhetoric of speed to wrap themselves in the flag of net neutrality for their own profit and to bank on the fact that most people don’t understand that bandwidth is capacity. Netflix is playing on people’s ignorance.

From the earliest days of the Net, providers have had an agreement that as long as the amount of traffic going between them is about the same in both directions, they don't charge each other. In a sense it would "all come out in the wash." But if the traffic became lop-sided, if one was sending much more traffic to the other than it was getting back, then they would charge each other. This is just fair. Suddenly, because movies consume a lot of capacity, Netflix is generating considerable load that wasn't there before. This isn't about blocking a single Verizon customer from getting his movie; this is about thousands of Verizon customers all downloading movies at the same time, and all of that capacity being consumed at a point between Netflix's network provider and Verizon. It is even likely they didn't have lines with that much capacity, so new ones had to be installed. That is very expensive. Verizon wants to charge Netflix or Netflix's provider because the traffic moving from them to Verizon is now lop-sided by a lot. This request is perfectly reasonable, and it has nothing to do with the Internet being neutral. Here's an analogy: imagine your neighbor suddenly installed an aluminum smelter in his home and was going to use 10,000 times more electricity than he used to. He then tells the electric company that they have to install much higher capacity power lines to his house and provide all of that electricity, and that his monthly electric bill should not go up. I doubt the electric company would be convinced.

The net neutrality debate basically confuses two things: traffic engineering and discriminating against certain sources of traffic. The confusion is created by flaws introduced fairly early and by what those flaws forced the makers of Internet equipment to do to work around them. Internet applications don't tell the network what kind of service they need from the Net. So when customers demanded better quality for voice and video traffic, the providers had two basic choices: over-provision their networks to run at about 20% efficiency (you can imagine how well that went over) or push the manufacturers of routers to provide better traffic engineering. Because of the problems in the Internet, about the only option open to manufacturers was to look deeper into the packet than is needed just to route it to its destination. However, looking deeper into a packet also means being able to tell who sent it. (If applications start encrypting everything, this will no longer work.) That of course not only makes it possible to know which traffic needs special handling, but also makes it tempting to slow down a competitor's traffic. Had the Net been properly structured to begin with (and in ways we knew about at the time), these two things would be completely distinct: one would have been able to determine what kind of packet was being relayed without also learning who was sending it, and net neutrality would only be about discriminating between different sources of data; traffic engineering would not be part of the problem at all.

Of course, Comcast shouldn’t be allowed to slow down Skype traffic because it is in competition with Comcast’s phone service.  Or Netflix traffic that is in competition with its on-demand video service. But if Skype and Netflix are using more than ordinary amounts of capacity, then of course they should have to pay for it.

Emerson: That takes care of three of the five flaws in TCP. What about the next two?

Day: The next two are somewhat hard to explain to a lay audience, but let me try. A Transport Protocol like TCP has two major functions: 1) make sure that all of the messages are received and put in order, and 2) don't let the sender send so fast that the receiver has no place to put the data. Both of these require the sender and receiver to coordinate their behavior. This is often called feedback, where the receiver is feeding back information to the sender about what it should be doing. We could do this by having the sender send a message and the receiver send back a special message indicating it was received (the "ack" we mentioned earlier) and that another can be sent. However, this process is not very efficient. Instead, we like to have as many messages as possible "in flight" between them, so the two sides must be loosely synchronized. However, if an ack is lost, the sender may conclude the messages were lost and re-transmit data unnecessarily. Or worse, the message telling the sender how much it can send might get lost: the sender is waiting to be told it can send more, while the receiver thinks it already told the sender it could. This is called deadlock. In the early days of protocol development, a lot of work was done to figure out what sequence of messages was necessary to achieve synchronization. Engineers working on TCP decided that a 3-way exchange of messages (the 3-way handshake) could be used at the beginning of a connection. This is what is currently taught in all of the textbooks. However, in 1978 Richard Watson made a startling discovery: the message exchange was not what achieved the synchronization. It was explicitly bounding three timers. The messages are basically irrelevant to the problem. I can't tell you what an astounding result this is. It is an amazingly deep, fundamental result – Nobel Prize level! It not only yields a simpler protocol, but one that is more robust and more secure than TCP. Other protocols, notably the OSI Transport Protocol, incorporate Watson's result, but TCP does so only partially, and not the parts that improve security. We have also found that this result implies the bounds of what networking is: if an exchange of messages requires the bounding of these timers to work correctly, it is networking or interprocess communication; if they aren't bounded, then it is merely a remote file transfer. Needless to say, simplicity, working well under harsh conditions (or robustness), and security are all hard to get too much of.
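For readers who want something more concrete, here is a small sketch of what "bounding three timers" means in practice. The timer names follow the usual description of Watson's delta-t result (maximum packet lifetime, maximum retransmission time, maximum ack delay), but the code and the particular constant chosen for the quiet interval are my illustration only, not Day's or Watson's.

```python
# A sketch (my illustration, not delta-t's actual specification) of
# Watson's observation: if three timers are explicitly bounded,
# connection state can be created and discarded safely without relying
# on a handshake to achieve synchronization.
from dataclasses import dataclass

@dataclass
class DeltaTBounds:
    mpl: float   # Maximum Packet Lifetime in the network (seconds)
    r: float     # maximum time a sender keeps retransmitting a packet
    a: float     # maximum time a receiver may delay an acknowledgement

    def quiet_time(self) -> float:
        # After this much silence, no packet, retransmission, or ack from
        # the old conversation can still be in flight, so the state can be
        # forgotten without ambiguity.  The multiple of (MPL + R + A) shown
        # here is one conservative choice; the exact constant depends on
        # the analysis.
        return 2 * (self.mpl + self.r + self.a)

bounds = DeltaTBounds(mpl=30.0, r=15.0, a=1.0)
print(f"state can be safely discarded after {bounds.quiet_time():.0f} s of silence")
```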

Addressing is even more subtle, and its ramifications are even greater. The simple view is that if we are to deliver a message in a network, we need to say where the message is going. It needs an address, just like when you mail a letter. While that is the basic problem to be solved, it gets a bit more complicated with computers. In the early days of telephones, and even of data communications, addressing was not a big deal. The telephones or terminals were merely assigned the name of the wire that connected them to the network. (This is sometimes referred to as "naming the interface.") Until fairly recently, the last 4 digits of your phone number were the name of the wire between your phone and the telephone office (or exchange) the wire came from. In data networks, this often meant simply assigning numbers in the order the terminals were installed.

But addressing for a computer network is more like the problem in a computer operating system than in a telephone network. We first saw this difference in 1972. The ARPANET did addressing just like other early networks: IMP addresses were simply numbered in the order they were installed, and a host address was an IMP port number, that is, the wire from the IMP to the host. (Had BBN given a lot of thought to addressing? Not really. After all, this was an experimental network. The big question was, would it work at all?! Let alone could it do fancy things! Believe me, just getting a computer that had never been intended to talk to another computer to do that was a big job. Everyone knew that addressing issues were important and difficult to get right, so a little experience first would be good before we tackled them. Heck, the maximum number of hosts was only 64 in those days.)

In 1972, Tinker AFB joined the 'Net and wanted two connections to the ARPANET for redundancy! My boss told me this one morning, and I first said, "Great! Good ide . . ." I didn't finish it; instead, I said, "O, cr*p! That won't work!" (It was a head-slap moment!) 😉 And a half second after that I said, "O, not a big deal, we are operating system guys, we have seen this before. We need to name the node."

Why wouldn’t it work? If Tinker had two connections to the network, each one would have a different address because they connected to different IMPs. The host knows it can send on either interface, but the network doesn’t know it can deliver on either one. To the network, it looks like two different hosts. The network couldn’t know those two interfaces went to the same place. But as I said, the solution is simple: the address should name the node, not the interface.[2]
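Here is a small, hypothetical sketch of that multihoming problem: with interface addressing, Tinker's two connections look like two unrelated destinations; naming the node lets the network see both interfaces as paths to one place. The identifiers are made up.

```python
# Hypothetical illustration: why naming the interface breaks multihoming.

# Interface addressing (the ARPANET/Internet style): the "address" is
# really the name of the wire, so Tinker's two connections look like two
# different hosts, and the network can't use one as a backup for the other.
interface_addresses = {
    "imp6-port2": "Tinker AFB",
    "imp9-port1": "Tinker AFB",   # same site, but nothing in the address says so
}
print(f"{len(interface_addresses)} apparently unrelated destinations under interface addressing")

# Node addressing: the destination is the node itself; each interface is
# just one of possibly several attachment points to it.
nodes = {
    "tinker": {"interfaces": ["imp6-port2", "imp9-port1"]},
}

def paths_to(node_name):
    """With node addressing, the network knows every way to reach a node."""
    return nodes[node_name]["interfaces"]

print(paths_to("tinker"))   # ['imp6-port2', 'imp9-port1'] -> real redundancy
```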

Just getting to the node is not enough. We need to get to an application on the node. So we need to name the applications we want to talk to as well. Moreover, we don’t want the name of the application to be tied to the computer it is on. We want to be able to move the application and still use the same name. In 1976, John Shoch put this into words as: application names indicate what you want to talk to; network addresses indicate where it is; and routes tell you how to get there.
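Shoch's distinction can be sketched as two simple mappings (hypothetical names, just to show why location-independent application names matter):

```python
# Hypothetical sketch of Shoch's distinction: the application name says
# *what* you want to talk to, the node address says *where* it currently
# is, and the route says *how* to get there.  Moving the application only
# changes the middle mapping; its name stays the same.
directory = {"print-service": "nodeB"}               # what  -> where
routes    = {"nodeB": ["nodeA", "nodeC", "nodeB"]}   # where -> how

def resolve(app_name):
    node = directory[app_name]
    return node, routes[node]

print(resolve("print-service"))     # ('nodeB', ['nodeA', 'nodeC', 'nodeB'])

# If the application moves to nodeD, only the directory entry changes:
directory["print-service"] = "nodeD"
routes["nodeD"] = ["nodeA", "nodeD"]
print(resolve("print-service"))     # ('nodeD', ['nodeA', 'nodeD'])
```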

The Internet still only has interface addresses. They have tried various work-arounds to make up for not having two-thirds of what is necessary. But like many kludges, they only kind of work, as long as there aren't too many hosts that need them. They don't really scale. But worse, none of them achieve the huge simplification that naming the node does. These problems are as big a threat to the future of the Internet as the congestion control and security problems. And before you ask: no, the IPv6 you have heard so much about does nothing to solve them. Actually, from our work, the problem IPv6 solves is a non-problem if you have a well-formed architecture to begin with.

The biggest problem is router table size. Each router has to know where next to send a packet, and for that it uses the address. However, for years the Internet continued to assign addresses in order. So unlike a letter, where your local post office can look at the state or country and know which direction to send it, Internet addresses didn't have that property. Hence, routers in the core of the 'Net needed to know where every address went. As the Internet boom took off, that table was growing exponentially and was exceeding 100K routes. (This table has to be searched on every packet.) Finally, in the early '90s, they took steps to make IP addresses more like postal addresses. However, since they were interface addresses, they were structured to reflect which provider's network they were associated with, i.e., the ISP becomes the "state" part of the address. If one has two interfaces on different providers, the problem above is not fixed; such a site actually needs a provider-independent address, which also has to be in the router table. Since even modest-sized businesses want multiple connections to the 'Net, there are a lot of places with this problem, and router table size keeps getting bigger and bigger; it is now around 500K, and 512K is an upper bound that we can go beyond, but doing so impairs adoption of IPv6. In the early '90s, there was a proposal[3] to name the node rather than the interface, but the IETF threw a temper tantrum and refused to consider breaking with tradition. Had they done that, it would have reduced router table size by a factor of between 3 and 4, so router table size would be closer to 150K. In addition, naming the interface makes mobility a complex mess.
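A rough way to see the table-size arithmetic (all numbers invented): provider-aggregated address blocks collapse into one route per provider, while every provider-independent, multihomed site adds a route of its own that cannot be aggregated.

```python
# Toy model of core router table growth.  Provider-assigned space
# aggregates into one entry per provider prefix; provider-independent
# space -- what multihomed sites end up needing when addresses name
# interfaces -- cannot be aggregated, so each such site is its own route.
# Numbers are invented for illustration.

def core_table(providers, sites_per_provider, pi_sites):
    pa_routes = providers                         # one aggregate per provider
    destinations_covered = providers * sites_per_provider
    pi_routes = pi_sites                          # no aggregation possible
    return pa_routes + pi_routes, destinations_covered

routes, covered = core_table(providers=5_000, sites_per_provider=10_000, pi_sites=400_000)
print(f"{routes:,} routes in the core, covering {covered:,} aggregated destinations")
```

The point is only that aggregation keeps the table small, and every exception to it (every multihomed, provider-independent site) shows up as another entry.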

Emerson: I see – so, every new “fix” to make the Internet work more quickly and efficiently is only masking the fundamental underlying problems with the architecture itself. What is the last flaw in TCP you’d like to touch on before we wrap up?

Day: Well, I wouldn’t say ‘more quickly and efficiently.’ We have been throwing Moore’s Law at these problems: processors and memories have been getting faster and cheaper faster than the Internet problems have been growing, but that solution is becoming less effective. Actually, the Internet is becoming more complex and inefficient.

But as to your last question, another flaw with TCP is that it has a single message type rather than separating control and data. This not only leads to a more complex protocol but also to greater overhead. They will argue that being able to send acknowledgements with the data in return messages saved a lot of bandwidth. And they are right: it saved about 35% of the bandwidth when using the most prevalent machine on the 'Net in the 1970s, but that behavior hasn't been prevalent for 25 years. Today the savings are minuscule. Splitting IP from TCP required putting packet fragmentation in IP, which doesn't work; had they merely separated control and data, it would still work. TCP delivers an undifferentiated stream of bytes, which means applications have to figure out what is meaningful, rather than delivering to the destination the same amount the sender asked TCP to send, which turns out to be what most applications want. Also, TCP sequence numbers (used to put the packets in order) are in units of bytes, not messages. This means they "roll over" quickly, either putting an upper bound on TCP speed or forcing the use of an extended sequence number option, which is more overhead. It also greatly complicates reassembling messages, since there is no requirement to re-transmit lost packets starting with the same sequence number.
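Two of these points lend themselves to a quick sketch (mine, not TCP's code): an undifferentiated byte stream forces the application to add its own framing to recover message boundaries, and sequence numbers counted in bytes wrap a 32-bit space in seconds at modern speeds.

```python
import struct

# A byte stream erases message boundaries: the receiver just sees bytes,
# so the application must add its own framing (here, a 4-byte length
# prefix) to recover "the same amount the sender asked to send".
def frame(message: bytes) -> bytes:
    return struct.pack("!I", len(message)) + message

def deframe(stream: bytes):
    msgs, offset = [], 0
    while offset + 4 <= len(stream):
        (length,) = struct.unpack_from("!I", stream, offset)
        msgs.append(stream[offset + 4 : offset + 4 + length])
        offset += 4 + length
    return msgs

stream = frame(b"hello") + frame(b"world")
print(deframe(stream))          # [b'hello', b'world']

# Sequence numbers in bytes roll over quickly: a 32-bit space covers
# about 4.3 GB, which at 10 Gb/s takes only a few seconds to send.
wrap_seconds = (2**32 * 8) / 10e9
print(f"32-bit byte sequence space wraps in ~{wrap_seconds:.1f} s at 10 Gb/s")
```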

Of the 4 protocols we could have chosen in the late '70s, TCP was (and remains) the worst choice, but they were spending many times more money than everyone else combined. As you know, he with the most money to spend wins. And the best part was that it wasn't even their money.

Emerson: Finally, I wondered if you could briefly talk about RINA and how it could or should fix some of the flaws of TCP you discuss above? Pragmatically speaking, is it fairly unlikely that we’ll adopt RINA, even though it’s a more elegant and more efficient protocol than TCP/IP?

Day: Basically RINA picks up where we left off in the mid-'70s and extends what we were seeing then but hadn't quite recognized. What RINA has found is that all layers have the same functions; they are just focused on different ranges of the problem space. So in our model there is one layer that repeats over different scopes. This by itself solves many of the existing problems of the current Internet, including those described here. In addition, it is more secure, and multihoming and mobility fall out for free. It solves the router table problem because the repeating structure allows the architecture to scale, and so on.
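A very loose sketch of "one layer that repeats over different scopes" (the class and names are mine, not RINA code):

```python
# Rough illustration of one kind of layer repeated over different scopes:
# each layer provides the same service (deliver data between two names
# within its scope) and rides on a narrower layer below it.
class Layer:
    def __init__(self, scope, lower=None):
        self.scope = scope        # e.g. a link, a network, an internetwork
        self.lower = lower        # the narrower layer this one is built on

    def deliver(self, src, dst, data, depth=0):
        print("  " * depth + f"[{self.scope}] {src} -> {dst}")
        if self.lower:
            # The wider layer's hop becomes an end-to-end task for the
            # narrower layer beneath it.
            self.lower.deliver(f"{src}.edge", f"{dst}.edge", data, depth + 1)

link = Layer("link scope")
network = Layer("network scope", lower=link)
internet = Layer("internet scope", lower=network)
internet.deliver("hostA", "hostB", b"hello")
```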

I wish I had a dollar for every time someone has said (in effect), “gosh, you can’t replace the whole Internet.” There must be something in the water these days. They told us that we would never replace the phone company, but it didn’t stop us and we did.

I was at a high-powered meeting a few weeks ago in London that was concerned about the future direction of architecture. The IETF [Internet Engineering Task Force] representative was not optimistic. He said that within 5-10 years, the number of Internet devices in the London area would exceed the number of devices on the ‘Net today, and they had no idea how to do the routing so the routing tables would converge fast enough.

My message was somewhat more positive. I said, I have good news and bad news. The bad news is: the Internet has been fundamentally flawed from the start. The flaws are deep enough that either they can’t be fixed or the socio-political will is not there to fix them. (They are still convinced that not naming the node when they had the chance was the right decision.) The good news is: we know the answer and how to build it, and these routing problems are easily solved.

[1] An IMP was an ARPANET switch, what today we would call a router. (It stood for Interface Message Processor, but it is one of those acronyms where the term itself matters more than what it stood for.) NCP was the Network Control Program, which managed the flows between applications such as Telnet, a terminal device-driver protocol, and FTP, the File Transfer Protocol.

[2] It would be tempting to say "host" here rather than "node," but one might have more than one node on a host. This is especially true today, with Virtual Machines so popular: each one is a node. Actually, by the early '80s we had realized that naming the host was irrelevant to the problem.

[3] Actually, it wasn’t a proposal, it was already deployed in the routers and being widely used.

from typewriters to telematics, media noise in Robert Zend

I’ve recently started working on my next book project, at the moment titled “OTHER NETWORKS,” which will be a history of pre-Internet networks through artists’/writers’ experiments and interventions. My last book, Reading Writing Interfaces, begins and ends with a critique of Google and magic, or sleights-of-hand that disguise how closed our devices are by cleverly diverting our attention to seemingly breathtaking technological feats. And so the roots of “OTHER NETWORKS” come partly from my desire to continue thinking through the political consequences and the historical beginnings of “the Internet” as the technological feat of the late 20th and early 21st centuries which also, as another instance of the user-friendly, disguises the way in which it is a singular, homogenous space of distributed control.

Still, despite the continuity between Reading Writing Interfaces and "OTHER NETWORKS," I am continually surprised by the way in which thoroughly print-based, analog writers also participated in telematic art/writing experiments (here I'm using 'telematics' for the process of long-distance transmission of computer-based information via telecommunications networks). For example, I've decided to begin my project by writing on early Canadian art/writing networks for Social Media: History and Poetics, a volume edited by Judy Malloy. Judy kindly directed me to Norman White's "hearsay" from November 1985, which was a tribute to Canadian poet Robert Zend, who had died a few months earlier. The project builds on the following text Zend wrote in 1975:

THE MESSAGE (FOR MARSHALL MCLUHAN)

THE MESSENGER ARRIVED OUT OF BREATH. THE DANCERS STOPPED THEIR  PIROUETTES, THE TORCHES LIGHTING UP THE PALACE WALLS FLICKERED FOR A MOMENT, THE HUBBUB AT THE BANQUET TABLE DIED DOWN, A ROASTED PIG’S NUCKLE FROZE IN MID-AIR IN A NOBLEMAN’S FINGERS, A GENERAL BEHIND THE PILLAR STOPPED FINGERING THE BOSOM OF THE MAID OF HONOUR. “WELL, WHAT IS IT, MAN?” ASKED THE KING, RISING REGALLY FROM HIS CHAIR. “WHERE DID YOU COME FROM? WHO SENT YOU? WHAT IS THE NEWS?” THEN AFTER A MOMENT, “ARE YOU WAITING FOR A REPLY? SPEAK UP MAN!” STILL SHORT OF BREATH, THE MESSENGER PULLED HIMSELF TOGETHER. HE LOOKED THE KING IN THE EYE AND GASPED: “YOUR MAJESTY, I AM NOT WAITING FOR A REPLY BECAUSE THERE IS NO MESSAGE BECAUSE NO ONE SENT ME. I JUST LIKE RUNNING.”

“hearsay” was an event based on the children’s game of “telephone” whereby a message – in this case, the text by Zend – is whispered from person to person and arrives back at its originator, usually hilariously garbled.  Zend’s text was “sent around the world in 24 hours, roughly following the sun, via a global computer network (I. P. Sharp Associates). Each of the eight participating centres was charged with translating the message into a different language before sending it on. The whole process was monitored at Toronto’s A-Space.” The final version, translated into English, arrived in Toronto as the following:

TO   HEAR

MESSENGER: PANTING.

THE DANCERS HAVE BEEN ORDERED TO DANCE, AND BURNING TORCHES WERE PLACED ON THE WALLS.

THE NOISY PARTY BECAME QUIET.

A ROASTING PIG TURNED OVER ON AN OPEN FLAME.

THE KING SAT CALMLY ON HIS FESTIVE CHAIR, HIS HAND ON A WOMAN’S BREAST.

IT APPEARED THAT HE WAS SITTING THROUGH A MARRIAGE CEREMONY.

THE KING ROSE FROM HIS SEAT AND ASKED THE MESSENGER WHAT IS TAKING PLACE AND WHY IS HE THERE? AND HE WANTED AN ANSWER.

THE MESSENGER, STILL PANTING, LOOKED AT THE KING AND REPLIED:
YOUR MAJESTY, THERE IS NO NEED FOR AN ANSWER. AFTER ALL,
NOTHING HAS HAPPENED. NO ONE SENT ME. I RISE ABOVE EVERYTHING.

Now, as it happens, I also just learned from a friend about Zend's incredible series of "typescapes," ARBORMUNDI, published in 1982, seven years after he wrote "THE MESSAGE." I wish I'd known about all of these works by Zend when I was working on Reading Writing Interfaces, as the third chapter is titled "Typewriter Concrete Poetry as Activist Media Poetics." There I delve into the era from the early 1960s to the mid-1970s in which poets, working heavily under the influence of McLuhan and before the widespread adoption of the personal computer, often deliberately courted the media noise of the typewriter as a way to draw attention to the typewriter-as-interface. Like the low-level noise in "THE MESSAGE" and the high-level noise in "hearsay," ARBORMUNDI elevates the noise of typewritten overlays and over-writing into a delicate art. It's appropriate, then, that the earliest (and perhaps first in the loose collection) typescape, from 1978, is of the Uriburu: "mythological serpent – the symbol of the universe – which constantly renews itself by destroying itself."

[Image: Robert Zend, ARBORMUNDI typescape 2]

While the blurb on the back from the Sunday Star celebrates that Zend created these typescapes with a manual typewriter, "no electronics, computers or glue involved," he clearly had a McLuhanesque bird's-eye view of the entire, interconnected media-scape of the '70s and '80s, from typewriters to telematics.

Since ARBORMUNDI seems to be quite rare and I’ve only come across some nice images and beautiful close-readings on Camille Martin’s blog, I decided to scan the whole thing – available here and below. Enjoy!

Robert Zend’s ARBORMUNDI, Copyright © Janine Zend, 1982, all rights reserved, reproduced with permission from Janine Zend