Other Cybernetics for Other AIs

Below is the text of a talk I’m giving tomorrow, Friday September 19th, in St. Pölten, Austria, at a wonderful event called “Reenacting Dartmouth,” organized by Seppo Gründler and Elisabeth Schimana. I’m also very keen to hear the papers by my fellow panelists Xiaowei Wang and Xavier Nueno.

*

I am very grateful for this opportunity to extend the work I’ve been doing in and around media archaeology and the Media Archaeology Lab for the last 15 years or so to the pre-history of AI and early cybernetics. In fact, the particular constellation of thinkers and ideas I touch on today had some influence on German media theorists such as Friedrich Kittler, who in turn became foundational to media archaeology. I am using this wonderful symposium, then, as an opportunity to indirectly perform, probably mostly for myself, some media archaeological excavations on media archaeology itself, while more directly using it to excavate some ideas and early critiques of AI we might have lost track of.

In particular, I have been intrigued by the idea that we could bring to bear on the pre-history and early history of AI a media archaeological way of thinking dedicated to excavating heterogeneity, along with its material-minded approach; I have also been interested in seeing how this approach would allow us to pinpoint moments that could have produced very different versions of artificial intelligence. After picking away at this way of thinking with a colleague for the last year, I’ve come to understand two things that I will touch on today: first, that critiques of what eventually became the AI we’re living through now have existed for a very long time–they long pre-date the coining of the term “artificial intelligence” in 1955. We need only dig these critiques up to see that we have been gripped by cultural amnesia for almost the last two hundred years. And second, that alternative versions of AI have indeed existed all along–they too need to be dug up, not to stage a return (as if that were even possible) but to prepare a move beyond where we currently are. Small, local, grounded, limited, materially specific, and ecologically minded versions of AI will probably never be profitable–but that’s not to say they aren’t possible.

My talk today also assumes we are all already familiar with the many deep and profound problems produced by today’s version of AI, including its seemingly unstoppable, ever-expanding consumption of land, energy, and resources; the way it threatens the possibility of meaningful work; and even the way it threatens the possibility of meaningful human existence. What continues to puzzle me are the very persistent and strange notions about what constitutes human intelligence driving all these problems. These notions about intelligence are not simply reducible to marketing discourse designed to reap yet more profit–the discourse itself, and even its appeal, must have come from somewhere. I really don’t want to linger too long in the current bizarre state of things but, for the sake of due diligence and supporting my claims with examples, I can quickly point to OpenAI’s CEO Sam Altman, who refers to their version of AI as “magic intelligence in the sky”; or to longstanding futurist and popular AI shill Ray Kurzweil, who claims that in twenty years AI will multiply our own intelligence “a millionfold.” Or I can point to Anthropic co-founder and CEO Dario Amodei, who decided to perform what he calls a more measured version of AI advocacy by self-publishing an essay in October 2024 titled “Machines of Loving Grace: How AI Could Transform the World for the Better.” Even while claiming to simply offer “educated and useful guesses” about the future of AI, he still makes some astonishing assertions, based on little to no evidence, about just how powerful so-called “powerful AI” will be in the near future:

It will be smarter than a Nobel Prize winner across most relevant fields…This means it can prove unsolved mathematical theorems, write extremely good novels, write difficult codebases from scratch, etc…It does not just passively answer questions; instead, it can be given tasks that take hours, days, or weeks to complete, and then goes off and does those tasks autonomously, in the way a smart employee would, asking for clarification as necessary…It does not have a physical embodiment…We could summarize this as a “country of geniuses in a datacenter.”

As the philosopher Hubert Dreyfus pointed out in one of the earliest critiques of AI, his 1965 paper “Alchemy and Artificial Intelligence,” and then again in his 1972 book What Computers Can’t Do, philosophers, mathematicians, engineers, artists, and writers have been fascinated with the idea of “thinking machines” since at least the 13th century. In that period, the Majorcan philosopher Ramon Llull developed what we could now call logical machines: paper discs inscribed with letters or symbols referring to lists of religious and philosophical attributes, which could be rotated against one another to generate combinations that, in their totality, were meant to represent all possible truths about religion and philosophy. It was, in a sense, a demonstration of how to artificially reveal truth (a toy sketch of this combinatorial mechanism follows the Lovelace quotation below). A couple of things are noteworthy about this tiny bit of history: it turns out that the primary use for Llull’s logical machines was as a coercive debating tool for Christians who wanted to convert Muslims. Not surprisingly, Llull was rumored to have been stoned to death on a missionary trip to Tunisia, once again standing as a lesson in “well, yes, you can do that, but really, should you?” Still, a seemingly endless line of philosophers, mathematicians, and engineers coming after Llull has mysteriously continued to believe the answer is “yes, you really should try to automate human thinking,” and no amount of satire or critique has deterred them–from the word-generating engine of the Grand Academy of Lagado in Jonathan Swift’s Gulliver’s Travels to Ada Lovelace’s more circumspect comments from 1843 about how Charles Babbage’s Analytical Engine reminds us we still need “to guard against the possibility of exaggerated ideas that arise as to the powers” of the machine. She wrote quite pointedly:

In considering any new subject, there is frequently a tendency, first, to overrate what we find to be already interesting or remarkable; and, secondly, by a sort of natural reaction, to undervalue the true state of the case, when we do discover that our notions have surpassed those that were really tenable. The Analytical Engine has no pretensions whatever to originate any thing. It can do whatever we know how to order it to perform. It can follow analysis; but it has no power of anticipating any analytical relations or truths. Its province is to assist us in making available what we are already acquainted with.
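To make Llull’s device a bit more concrete: stripped of its theology, it is at bottom a combinatorial enumerator, something a few lines of Python can stand in for. This is a toy sketch under loose assumptions, not a reconstruction; the disc contents below are simplified stand-ins inspired by, but not faithful to, Llull’s actual figures.

```python
from itertools import product

# A toy model of Llull's rotating paper discs. Each "disc" carries a list of
# attributes; rotating the discs against one another enumerates every
# possible alignment. The terms below are illustrative stand-ins, not a
# faithful transcription of Llull's figures.
discs = [
    ["goodness", "greatness", "eternity"],          # hypothetical disc A
    ["difference", "concordance", "contrariety"],   # hypothetical disc B
]

# Every alignment yields one candidate statement; the full set stands in for
# the "totality" of derivable truths.
for combination in product(*discs):
    print(" / ".join(combination))
```

With two discs of three terms each, the device yields nine combinations; add more discs and more terms and the count multiplies quickly, which is precisely what made a mechanically generated totality of “all possible truths” seem within reach.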

Again, no amount of critique seems capable of dismantling these assumptions about human intelligence, assumptions that have existed probably since the Ancient Greeks’ inauguration of western philosophy: that intelligence is something that can be abstracted from human bodies, from material reality, from culture and context, and that human intelligence can therefore be mechanized or turned into a set of procedures. Even the creator of the first chatbot, Joseph Weizenbaum, called the attempt to “build a machine on the model of man, a robot that is to have its childhood, to learn language as a child does, to gain its knowledge of the world by sensing the world through its own organs, and ultimately to contemplate the whole domain of human thought…” a “most grandiose fantasy.” He writes,

Man faces problems no machine could possibly be made to face. Man is not a machine…although man most certainly processes information, he does not necessarily process it in the way computers do. Computers and men are not species of the same genus. (202-203)

No matter how sensible Weizenbaum’s critique seems to us today, it did nothing to dissuade John McCarthy, the computer scientist who originally coined the term ‘Artificial Intelligence.’ In the opening lines of a scathing review of Weizenbaum’s book, McCarthy wrote that “The idea of a stored program computer leads immediately to studying mental processes as abstract computer programs. Artificial intelligence treats problem solving mechanisms non-biologically,” which simply expresses, he goes on to say, a “rationalist world view” that Weizenbaum “fears” (“An Unreasonable Book”). In hindsight, and largely thanks to the field of STS, we can now clearly see how fully McCarthy and all those who have come after him were seduced by the ideology of this “rationalist world view.” And, knowing that this worldview is in fact a powerful ideology, we can also now clearly see that the dogged pursuit in the 21st century (not so much in the 13th through 18th centuries) of the factually impossible and even undesirable complete mechanization of human brain power by way of AI is probably just highly profitable (for a select few) and nothing more.

My point with this very brief overview of a very long history of humans fantasizing about a world somehow magically and impossibly untethered from the actual material world itself, and of the castigation of those who are excluded from or uninterested in such aspirations, is merely to remind us that this history of attempts, proposals, and critique EXISTS. And whether the tools at hand for performing these magical acts are symbols, rotating paper discs, radio or telephone relays, or computer chips, the story of a push and pull between ungrounded fantasy driven by abstraction and grounded reality driven by material matters of fact remains the same.

Still, while we can find examples of thinking about human brains as abstract mechanisms that are identical to and pre-date those proposed by the founders of the field of AI, it’s commonly understood that the term and the field of study known as ‘Artificial Intelligence’ emerged directly out of cybernetics in 1955. The same John McCarthy I mentioned earlier reportedly came up with the term as part of the proposal for the Dartmouth Summer Research Project on Artificial Intelligence, both to distinguish this group of engineers’ approach from the much more interdisciplinary collection of people who identified with cybernetics and to exclude the apparently insufferable Norbert Wiener, who is still considered the founder of cybernetics (“Review of The Question of Artificial Intelligence“). Personalities aside, McCarthy’s exclusion of Wiener makes some sense given that Wiener went to some pains to clarify the thinking of his 1948 book Cybernetics by publishing The Human Use of Human Beings in 1950, where he very explicitly warns against insisting too strongly on seeing the human brain “as a glorified digital machine” (65) and also, somewhat surprisingly, reminds us of the very problematic labor dynamics involved in using “automatic machines.” Uncannily anticipating the current state of things, he writes, “Any labor which competes with slave labor must accept the economic conditions of slave labor…Thus, the new industrial revolution is a two-edged sword. It may be used for the benefit of humanity but only if humanity survives long enough to enter a period in which such a benefit is possible” (162).

Surprisingly, even given these powerful moments in early cybernetics, I’m aware of only a handful of scholars digging up the alternate histories of AI embedded in works by Wiener and others. One exception is Matteo Pasquinelli, who briefly delves into an alternative history of AI by looking at a 1940s-era debate about how perception takes place, staged between what he characterizes as the mechanistic cyberneticians and the holist members of the Gestalt school. Two other exceptions are the philosopher of technology Yuk Hui and the STS scholar Andrew Pickering.

Over the past five years or so, Hui has been advocating for a return to cybernetics because of the way it “laid an epistemological foundation for modern automation” (“Introduction” 15)–one based not on a mechanistic view of machines but rather on “circular causality,” which makes possible what he calls a “political economy of machines” centered around “technodiversity” (“Machine and Ecology” 43). Quite in opposition to those who seem intent on accelerating the destruction of the planet and its inhabitants because they have been seduced by the allure of limitlessness promised by AI (right along with a long list of biohacking tips peddled by longevity gurus), Hui advocates for technodiversity because, he writes, it is “fundamentally a question of locality. Locality [means]…the capacity to reflect on the technological becoming of the local…for multiple localities to invent their own technological thought and future” (61). Hui’s point of view is echoed by Emily M. Bender and Alex Hanna, the authors of The AI Con, which came out just a few months ago, who preface their critique by saying that the term “AI” has always obscured how diverse these techniques of automation actually are. But Hui’s call for “technodiversity” does more than remind us of how diverse AI has always been–it is a call to draw on cybernetics as a tool to reimagine our present and future relationship with machines, with each other, and with the natural world. While there is a lot of philosophical speculation in Hui’s work and little specific engagement with particular cybernetic thinkers beyond Wiener, I still hope scholars continue to produce work in this vein, perhaps augmented with the material-mindedness of media archaeology, so that we move beyond discourse and into actual instances of technodiversity. For example, what particular pieces of technology, real or imagined, exemplify the sort of “redefinition of the relation between machines and ecology” that Hui believes was set in motion by a certain strain of cybernetic thinking (“Machine and Ecology” 49)?

While Andrew Pickering’s philosophical touchstones and vocabulary are very different from Hui’s, he too writes compellingly on the “sketches of another future” he finds in British cybernetics. For example: in thumbnail sketches of cybernetics we often say that it was interested in communication and control, assuming that “control” implies a top-down structure of command over humans, animals, machines, the environment, and so on. But Pickering makes it clear that British cyberneticians such as Ross Ashby, Stafford Beer, and Gregory Bateson were actually invested in distributed control across both organic and inorganic systems, and that “control” in this context meant something more like an understanding of how adaptation takes place and how it supersedes the boundaries dividing organic from inorganic life. For Pickering, this particular version of cybernetics models “a way of acting with nature, accommodating ourselves to it and going along with it, in sharp contrast to the linear acting on the world which has got us into so much trouble in the Anthropocene” (“Cybernetics in Britain” 121).

These interventions by Pasquinelli, Hui, and Pickering, which return us to cybernetics and the moment when AI could have taken a very different turn, do a lot of work to move us farther along in imagining other possible presents and futures: ones that are wildly diverse and heterogeneous, ones whose design reflects an understanding of brains and the activity of thinking as deeply entangled with the organic as much as the inorganic, and thus as inherently bounded, limited, and terrestrial. Another way of putting it is that their work paves a very different path from the current claims I have already touched on: claims that–thanks to the continual expansion of sprawling data centers, unsustainable energy consumption, and the relinquishment of responsibility for decisions ranging from the mundane to who lives and who dies–we have supposedly arrived at a version of artificial intelligence that will soon exceed human cognitive abilities.

What’s more, one can find still other alternative histories of AI by returning to one of the founding fathers of the field, Alan Turing. Most of us know Turing as the creator of the Turing test, designed to ascertain whether machines can think. But I was surprised to discover that Turing was a frequent visitor to Wittgenstein’s lectures between 1933 and 1934, which later became the Blue and Brown Books, and in which Wittgenstein critiques western philosophy’s “craving for generality” and its “contemptuous attitude towards the particular case,” exemplified for him by the nonsensical question, “Is it possible for a machine to think?” He continues, “It is as though we had asked, ‘Has the number 3 a colour?'” Suddenly the opening lines of Turing’s famous 1950 piece, “Computing Machinery and Intelligence,” take on a very different meaning, as Turing clearly echoes Wittgenstein’s sense that the question “Can machines think?” cannot possibly produce a meaningful answer (1). He even writes that the question is “too meaningless to deserve discussion” (8). Instead, he continues, we have to ask whether a machine can “imitate” a human and thereby deceive us into thinking the machine is human. I assume that knowing one of the founders of AI thought of the pursuit more in terms of an elaborate magic trick wouldn’t dissuade today’s AI advocates, but this line of critique running from Wittgenstein through Turing surely paints the endeavor in a very different light. Moreover, the fact that Turing was also a frequent visitor to the Ratio Club–a British gathering of biologists and engineers interested in cybernetics that included Ross Ashby–ought to significantly trouble our sense of how and why AI came to be.

Just to complicate matters even more: much as Turing is commonly misread, Ashby’s 1952 Design for a Brain can, at first glance, seem to undercut Pickering’s straightforward claim that British cybernetics represents an alternative to the cybernetics we’ve learned to associate with top-down control and a rationalist world view. Ashby opens the book by announcing his interest in “mechanistic explanations for adaptive behavior,” seeking, as he put it, “a logic of pure mechanism” that, if successful, would produce “a specification for building an artificial brain” (10). Without reading his 1948 article of the same title, “Design for a Brain,” one would never know that he also believed that, unlike the engineer, the biologist understands that “the human brain is not a thinking machine, it is an acting machine,” so that the brain for Ashby is ultimately an “embodied organ, intrinsically tied into bodily performances” (Pickering 6).
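It may help to see in miniature what a “mechanistic explanation for adaptive behavior” could look like. Below is a minimal sketch of the ultrastability idea at the center of Design for a Brain, with numbers and dynamics that are my own illustrative choices rather than Ashby’s actual circuitry: when an “essential variable” drifts outside its viable bounds, the machine adapts not by reasoning about its error but by randomly rewiring itself until its behavior is viable again.

```python
import random

# A toy sketch of Ashby-style ultrastability (all parameters illustrative).
LIMIT = 5.0                  # viable bounds on the "essential variable"
state, gain = 1.0, 1.5       # an initially unstable wiring: |gain| > 1 diverges

for _ in range(500):
    # one step of a simple feedback loop, plus a small environmental disturbance
    state = gain * state + random.uniform(-0.1, 0.1)
    if abs(state) > LIMIT:
        # the essential variable is out of bounds, so the "step mechanism"
        # fires: the system randomly rewires itself instead of computing a fix
        gain = random.uniform(-1.0, 1.0)
        state = 1.0

print(f"settled on gain={gain:.2f}, final state={state:.3f}")
```

Nothing in this loop represents or reasons about the problem; the system simply acts, fails, and reconfigures until it stumbles on a viable configuration, which is one way of seeing why Pickering reads Ashby’s brain as an acting machine rather than a thinking one.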

Stafford Beer’s work presents the same interpretive problems as Turing’s and Ashby’s. While most of his 1959 Cybernetics and Management does seem to concern itself with applying models of a “thinking machine” to the governmental or corporate management of humans, it is not until the end of the book that he makes it clear that, even though cybernetic machines dismantle clear divisions between humans, animals, machines, and the environment and institute a distributed model of control, all of this

would be meaningless to someone who had not followed through the development of our argument. So the kind of phrase that usually gets into print in condensed accounts is ‘thinking machine’, making it possible for someone to say, ‘so there can be thinking machines. From this we infer that i) human beings are really just machines after all, that ii) we move into an epoch in which no human activity will be outside the range of simulation…these ‘developments’ of cybernetic thinking are huge absurdities: They breed in the muck of language.’ (207)

To close: in my own close readings of these cybernetic thinkers, along with works related to the pre-history of AI, I’ve noticed a wonderfully divergent and, again, even contradictory range of ideas across their work, and sometimes even within a single work. It turns out it’s diversity and heterogeneity all the way down, and this probably explains why it still seems possible to follow a trail of breadcrumbs from early cybernetics to contemporary AI while ignoring all the claims and findings that would have produced a very different contemporary moment than the one we’re living through.

Sources

Amodei, Dario. “Machines of Loving Grace: How AI Could Transform the World for the Better.” October 2024.

Ashby, Ross. Design for a Brain (London: Chapman & Hall, 1952).

—. “Design for a Brain.” Electronic Engineering 20 (December 1948): 382-83.

Beer, Stafford. Cybernetics and Management (New York: Wiley, 1959).

Dreyfus, Hubert. “Alchemy and Artificial Intelligence.” RAND Corporation Paper P-3244, 1965.

—. What Computers Can’t Do: A Critique of Artificial Reason (New York: Harper & Row, 1972).

Fauria, Krysta. “AI Apocalypse? Why Language Surrounding Tech is Sounding Increasingly Religious.” AP News 29 August 2025.

Hui, Yuk. “Introduction.” Cybernetics for the 21st Century. Ed. Yuk Hui (Hong Kong: Hanart Press, 2024).

—. “Machine and Ecology.” Cybernetics for the 21st Century. Ed. Yuk Hui (Hong Kong: Hanart Press, 2024).

Lovelace, Ada. Ada, The Enchantress of Numbers. Ed. Betty Alexandra Toole (Mill Valley, CA: Strawberry Press, 1992).

McCarthy, John. “An Unreasonable Book.” Physics Today (1976).

—. “Review of The Question of Artificial Intelligence.” Annals of the History of Computing (1989).

McCulloch, Warren and Walter Pitts. “A Logical Calculus of the Ideas Immanent in Nervous Activity.” The Bulletin of Mathematical Biophysics 5:4 (1943).

Pasquinelli, Matteo. The Eye of the Master: A Social History of Artificial Intelligence (London and New York: Verso, 2023).

Pickering, Andrew. The Cybernetic Brain: Sketches of Another Future (Chicago: University of Chicago Press, 2010).

—. “Cybernetics in Britain.” Cybernetics for the 21st Century. Ed. Yuk Hui (Hong Kong: Hanart Press, 2024).

Turing, Alan. “Computing Machinery and Intelligence.” Mind 59 (1950): 433-60.

Weizenbaum, Joseph. Computer Power and Human Reason (New York: W.H. Freeman and Company, 1976).

Wiener, Norbert. Cybernetics: Or Control and Communication in the Animal and the Machine (New York: John Wiley & Sons, 1948).

—. The Human Use of Human Beings (Boston: Houghton Mifflin, 1950).

Wittgenstein, Ludwig. The Blue and Brown Books (Oxford: Basil Blackwell, 1958).