Društvo LJUDMILA
Rozmanova ulica 12
1000 Ljubljana
Slovenia
Venue: osmo/za

Digital Dish 2024
Eryk Salvaggio

All this talk of generative AI has only driven me deeper into the annals of early computing, with its amassments of punch cards and reel-to-reel tape. I’m not seeking some balm of nostalgia from the era of simpler machines. Instead, I find myself drawn to the ideologies that surround them, particularly models of interaction that have since been lost or ignored. Models for interacting with what we call AI are not as modern as we think. Gordon Pask’s conversation theory examined “black box” problems in 1960,[1] and the rise of ChatGPT offers an occasion to re-engage with Pask’s work.

Some biographical notes might be in order. Gordon Pask was a British cybernetician, born in 1928 in Derby, England, whose work probed the depths of human-machine interaction and the processes of social learning. Pask obtained a degree in Natural Sciences, specializing in experimental psychology, from Cambridge University, and later received a Ph.D. in the psychology of learning and the principles of control from London University. Much of Pask’s work might today be considered part of the canon of technological artworks: in the 1950s he designed “Musicolour,” an interactive installation that translated the improvisations of musicians into responsive light patterns, notably using these lights to influence and encourage improvisation in the human performers. Perhaps Pask’s most significant contribution was the development of Conversation Theory in the 1970s, which provided a framework for understanding learning and communication processes.

That Pask is not broadly known beyond a network of cyberneticists is no mystery. He wrote a lot, and much of it was difficult: Lacanian diagrams, widely multi-disciplinary references, and a precise but idiosyncratic vocabulary make the work hard for those with a passing curiosity to settle into. This may be compounded by accounts of delightful but disorienting interactions in which the listener “was never quite clear whether he was dealing in poetry, science, or humor.”[2] Nonetheless, time spent with Pask’s writing reveals something prescient, not only for how it understands and challenges our mythologies of Large Language Models and generative AI, but because Pask’s model for understanding these systems is akin to the role of “creative technologists” in defining relationships with new technologies. Pask developed these technologies in the framework of “maverick machines,” intended to challenge the orthodoxy of computation in his day, but also to develop new narratives and design paths through prototypes.

In this paper, I’ll offer two Paskian frameworks: his approach to black box systems, and his distinction between communication and conversation. Then we’ll look at the history of maverick machines as an origin story for creative technologists, one that defines the designer or artist as a provocateur rather than a product tester. If most histories of digital or computer arts emerge from the experimental think tanks of Bell Labs or the promotions team at Google, Pask is notable for being a hacker: rather than making work that celebrated corporate tech, his work re-imagined what technology could be. In so doing, his work served as a form of critique of the surrounding norms. That is even more true today.

Pask in the Box

If our aim is to evaluate Pask in light of generative AI, his 1960 paper, “The Natural History of Networks,”[3] is a prescient starting point. Delivered as part of the Self-Organizing Systems conference of the Office of Naval Research, Pask’s remarks focused on understanding black-box systems from the position of a “natural historian,” one who understands “the art of the rabbit run, almost by living the part of a rabbit” (232). Pask begins by defining systems: “Any pattern of activity in a network, regarded as consistent by some observer, is a system” (233). Specialized expertise defines the boundaries of those systems: a physicist and a biologist might look at the same system but see completely different operations. Such “reference frames” suggest that most knowledge of a system is limited to knowledge of its sub-systems, rather than their interactions.

Pask suggests an alternative: that “systems may also be controlled by interacting with them,” an approach that looks slightly off to the specialist. Pask compares it to goading an elephant forward with food. If a specialist is merely interested in the motion of elephants, they may wonder why we would feed them. Researchers studying the entirety of a system’s behaviors appear from the outside to be “skipping illogically from one frame to another” (234). For Pask, that’s how we discover the “elephant system.” If we want to understand the full elephant, we observe responses from trunk to tail. These interactions are rarely an input-output equation. Instead, the elephant might eat, might not, might eat and spit out rotten food, and so on. The behaviors of a system are not fixed laws defined by strict flows, but striations open to variation and possibility.

We can apply Pask’s work to the elephant systems of contemporary text and image generation, such as diffusion models, GANs, or Large Language Models. These are distinct systems, but they share some behaviors: chance and prediction interact within an opaque model to make decisions. For example, how a Large Language Model determines that the “apple” in “He ate an apple” is a fruit and not a tech company remains unclear. But this is comparable to looking at the tail of a dog and saying we do not know why it wags, while we ignore its caretaker pulling into the driveway. They are black boxes, but only at certain scales. If we look closely for specific mechanisms in the network that inform or shape their behavior, we find mysteries. If we step back, however, the black box of generative AI seems overstated. We know that the apocryphal apple is not a tech company because we do not eat tech companies.

We seem compelled to seek logic where there is essentially randomness. Take, for instance, when an LLM completes a sentence such as “She looked at the weather and stayed inside to avoid the ______.” It could finish with “snow,” “rain,” or “hail.” Statistically speaking, each option is plausible. We might see “rain” more frequently than “hail,” but that doesn’t mean we should be surprised when “hail” is chosen. The selection process of such models is best described as randomness directed by loose probabilities, that is, stochastic. Trying to find a specific logic in the random assignment of new words to previous strings overcomplicates a coin toss. To understand a closed system, it may be better to orient ourselves as observers, rather than seeking explanations of specific mechanisms that are off-limits to human eyes or logic.
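To make “loose probabilities” concrete, here is a minimal sketch in Python of this kind of weighted sampling. The candidate words and their probabilities are invented for illustration; a real model derives a distribution over tens of thousands of tokens from its learned weights.

```python
import random

# Invented next-word distribution for the prompt
# "She looked at the weather and stayed inside to avoid the ____".
# A real LLM computes these weights; here they are assumed.
next_token_probs = {
    "rain": 0.55,
    "snow": 0.30,
    "hail": 0.15,
}

def sample_next_token(probs):
    """Pick one word at random, weighted by its probability."""
    tokens = list(probs)
    weights = list(probs.values())
    return random.choices(tokens, weights=weights, k=1)[0]

# "rain" is the likeliest completion, but "hail" still appears
# roughly 15% of the time: randomness directed by loose probabilities.
print(sample_next_token(next_token_probs))
```

Running this repeatedly produces mostly “rain” and occasionally “hail”; hunting for a deeper reason why “hail” appeared on any given run is the coin-toss overcomplication described above.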
Pask would later describe black boxes as a framing problem rather than a technical one, which results when “the experimenter believes that all features of the system that are relevant to his inquiry can be assembled, in principle, and reduced to expressions in a single frame of reference.”[4] Taking cues from Pask, we might approach Large Language Models as intricate networks of intertwined social and technical components, a web of decisions and interactions. While zeroing in on specific decision-making processes might be useful for machine learning engineers, it deviates from the holistic perspective encouraged by cybernetics, which shifts our attention from the individual gears to what emerges from constellations of moving parts.

Communication vs Conversation
Pask’s 1980 paper, “The Limits of Togetherness,”[5] again speaks in terms that we might apply directly to today’s LLMs: Pask distinguishes between communication and conversation. For Pask, communication is “the transmission and transformation of signals.” Such communication, Pask argues, is key to having conversations, but even bad communication can facilitate good conversations. Pask sees conversation, simply defined, as “concept sharing”: “In communication, information transfer is founded upon selection, albeit statistical, amongst states of transmitters and receivers that are extra-theoretically specified as independent ... but synchronized, for example, by a recognizable punctuating symbol” (999). ChatGPT is guided by question marks and full stops: its goal is to pass off text as believable. Human conversation relies on these markers to make sense of shared concepts, but its goal is to come to a mutual understanding. The goals and functions of communication and conversation are different.

Consider a question posed by a user to ChatGPT. It is designed as a chatbot, implying a kind of multidirectional conversation. In Paskian terms, however, what we have is a communication channel, a place where signals are exchanged. We pose a question, and the set of words in our query is analyzed and extrapolated into a response. ChatGPT is never answering a question. It is extending our question in the shape of an answer: studying the query for keywords that it can extrapolate into a convincing response. It is a purely statistical operation. Pask would contrast this with human conversation in that ChatGPT cannot share concepts with us. It can only communicate signals: which words are most likely to follow others, distinguished with punctuation marks into an approximation of language. We can make sense of this, but it is not a Paskian conversation. Pask writes, “The algebraic constraints which give ‘machinehood’ to the hardware are designed to prohibit conversation with machines. For instance, you cannot, by definition, ‘disagree’ with a machine; you can only say it is a ‘broken’ machine” (1000). Paul Pangaro, a student of Pask, contrasts this brokenness with conversational disagreement, framing Pask’s work within the idea of personalization:

“If I were sitting in front of you now in a human-to-human conversation and responding to your question, should I give you the same answer I would give anyone else? Should I give you the same answer no matter what the previous question was? Shall I just ignore what I have observed in our previous conversations about what is an effective conversation with you? Though they behave as poorly as this, we call today's machines "personal computers". That is of course a lie; computers of standard commercial fare are merely impersonal ones” (2001)[6].

Two decades of data centers and processing power later, we’ve shifted from “personal computers” to “artificial intelligence,” but the limits — and this sentiment — remain. These are systems that respond to information, but they do not respond in meaningful ways to situations, and they certainly do not adapt based on any shared discussion of concepts.

It’s not just machines, either. Pask sees plenty of these kinds of interactions in the contemporary Information Environment. In fact, he defined the challenge of our day not as the problem of communication, which has effectively been solved: we have accessible long-distance networks and real-time data access. Instead, he sees our challenge as that of preserving the features of conversation in a world of “excessive togetherness,” in which we are surrounded by “communication which looks like conversation but is not at all conversational.” Contrast this with consciousness sharing, in which we operate as knowingly subjective observers of our world, sharing that subjectivity with others. This problem lingers in our technologies, even as the tools to communicate saturate our landscape. But it is not strictly a technological problem. Pask describes the work of big committees: entities that do not decide, but cluster decisions made by small groups within them. “The larger consensus amounts to the distribution of blame, a ‘committee decision’ for which no one is responsible” (1001).

Pask might easily be speaking of our deference to algorithmic authority. Artificial intelligence replicates the role of a “committee” quite effectively: it communicates but does not converse. We have datasets that inform decisions in solitude — not even the small, closed-room cabal of party leaders or management consultants, but a single point of analysis drawn from data at scales no single human could process. Algorithms challenge decision-making in that they communicate decisions but strip accountability from the humans who follow them, offering no transparency into how such decisions are made. No algorithm converses with the defendant seeking bail, nor does it ask loan seekers about their credit history. Instead, signals are inscribed into interfaces and compared against datasets: “snapshots of amplifying, self-replicating, and self-stabilizing processes that grow and stereotype by entirely systemic mechanisms” (1002). Communication occurs, but conversation is absent.

Pask might suggest that conversational technologies are possible, but they would look nothing like today’s GPTs or diffusion systems. Testing this hypothesis was a driver of the systems he built. Pask was interested in the “conservation of consciousness,” which is not to be confused with the “sentience” or “consciousness” often attributed to AI, or predicted to emerge in immediate or long-term visions of the future. For Pask, conversation was a technique for the conservation of a consciousness that emerges between participants. Paul Pangaro describes Pask’s theory of conversation as one in which:

“...our shared awareness is persistent, in that it carries forward in time, even when our current exchange is over. We retain the experience of the conversation in that our mental processes have been changed as a result of the conversation, and we carry those changes with us. While each of us may carry forward somewhat different concepts, they are grounded in our agreements. So, consciousness is conserved. Put another way, and as Pask loved to say: Once begun, conversations never end.”[7]

ChatGPT does not remember us and does not change to meet us. The conversation ends when we log off. It is not a conversational system: it communicates without consciousness. Pask would have grasped this immediately. But he might also have avoided declaring this a technological condition. Communication without conversation is a human behavior, too: simply put, it happens when we can hear but don’t listen.

The Creative Technologist

Pask’s frameworks for human-computer interaction stand on their own. But Pask is particularly interesting to me as an artist bridging questions about ethics and generative AI. Pask was perhaps the closest cybernetics had to a prototypical creative technologist, working to develop alternative conceptual and technical models for computer systems. Pask defined himself as a “mechanic philosopher,” and was interested in the production of “maverick machines” as a way of structuring and advancing the hypotheses described above. At the heart of his practice was posing a theory and then building a machine that would allow him (or users) to investigate it. Pask clarified a definition of maverick machines in 1982: “A computer that issues a rate demand for nil dollars and nil cents (and a notice to appear in court if you do not pay immediately) is not a maverick machine. It is a respectable and badly programmed computer... Mavericks are machines that embody theoretical principles or technical inventions which deviate from the mainstream of computer development, but are nevertheless of value.”[8]

While Pask is known mostly for his educational machines, artistic pieces fill out the portfolio. Cybernetic Theater, from 1964, proposed a system to facilitate interactive performances in which the audience offered real-time feedback to actors. This feedback loop would create a meta-dialogue, where performers and audiences collaborate on the piece through a mediated conversation. Colloquy of Mobiles is perhaps the most notable of Pask’s creative technology projects, particularly as it was brought into conversation with contemporary robotics and AI through a 2018 revival of Jasia Reichardt’s 1968 “Cybernetic Serendipity” exhibition, where it was originally shown.[9] Colloquy attempted to translate aspects of conversation theory into the interactions of machine sensors, which circled each other in a dance of seeking, accepting, and rejecting advances by other sensors. Humans mediated these spaces, moved by the movements of the machines, able to interfere or facilitate as they liked.

But the first of these machines was Musicolour, prototyped in 1953, a precursor to today’s light shows and interactive music visualizations. The goal of the system was to model “conversation” between a human and a machine in interaction. This is the frame that much of Pask’s work would draw on: “exploring what it might mean to enable conversations between humans and machines or between humans through machines.”[10] Musicolour had a series of microphones and filters that could detect the frequency range and other properties of the sound performed by a musician. These measurements were averaged, and the mean values were used to control the values of lights and the motions of a color wheel.

“Performers were able to accentuate properties of the music and reinforce audio-visual correlations that they liked (for example, high notes with a particular visual pattern). Once performers became familiar with the filter-value selection strategy of the machine, they were able to establish time-dependent patterns in the system and reinforce correlations between groups of musical properties. It is important to note that there were no fixed mappings between sounds and lights: these were developed through the interaction of the musicians with Musicolour.” (Bird, Di Paolo 2008, 193)
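The dynamic Bird and Di Paolo describe is easier to see in code than in prose. Below is a toy sketch in Python of an adaptive sound-to-light mapping of this general kind: the band count, rates, and threshold are invented, and Pask’s actual machine was analog hardware, not software. The point is only that the mapping is a product of the interaction’s history, not a fixed lookup table.

```python
NUM_BANDS = 4           # frequency bands picked out by a filter bank
HABITUATION_RATE = 0.2  # how quickly a band "tires" of repetition
RECOVERY_RATE = 0.05    # how quickly interest returns while a band rests

# Each light starts out fully responsive to its frequency band.
interest = [1.0] * NUM_BANDS

def step(band_energies):
    """One update cycle: map filtered sound energy to light levels.

    A band that is driven constantly loses "interest" and its light
    dims, nudging the performer toward other parts of the spectrum;
    neglected bands slowly recover. There is no fixed sound-to-light
    mapping: it drifts with the history of the performance.
    """
    lights = []
    for i, energy in enumerate(band_energies):
        lights.append(energy * interest[i])
        if energy > 0.5:  # band is being exercised: habituate
            interest[i] = max(0.0, interest[i] - HABITUATION_RATE)
        else:             # band is resting: recover
            interest[i] = min(1.0, interest[i] + RECOVERY_RATE)
    return lights

# A performer hammering the first band watches its light fade,
# cycle by cycle, until they vary the music.
for _ in range(5):
    print([round(v, 2) for v in step([0.9, 0.1, 0.0, 0.2])])
```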

Performers could learn the system, but the system could also learn the performer. This dynamic created situations in which performers were encouraged to diversify their performances through improvisation, in order to keep the system lively and engaging. The system was deployed in a handful of nightclubs but never gained mainstream success, and Pask shelved it in 1957.

The commercial success of Musicolour is less interesting to me than the proposition of the system as an artistic conversation, and the contrast we can make between Musicolour as a technical, creative system and the relationships we have with generative systems today. Arguably, Musicolour’s assemblage of circuits is far more conversational than today’s generative systems, be it GPT-4 or Midjourney. Pask’s Musicolour may have been a model for such “conversation” as an investigation into the nature of black boxes. The performer plays music while probing the machine’s responses. Musicolour is a black box, at least to most performers, but music is the mechanism for probing this relationship. The music prompts a relationship, the relationship triggers certain effects, and the performer calibrates to those effects. Both contribute meaningfully to this conversation: “The performer’s present emotional experience (of the performance), their cognition/thoughts (prompted by machine response), and their history (experiences rehearsing the musical piece) decide the nature of the resulting performance. Personal and situational factors guiding accumulation of experience mediate exteriorized action.”[11]

Compare this to our interactions with ChatGPT or Midjourney. While what each tool produces in response to our prompt may vary, the system itself remains unmoved. This is not to imply that it ought to be moved, but simply to assert that claims to the contrary are flatly false. The GPT models behind ChatGPT, for example, relied on a fixed set of training data with a cutoff in 2021. No interaction with the system cultivates any deeper sense of “awareness” or “consciousness” in the chatbot. The system does not learn from its interactions: it continues to produce text through the same stochastic process. Midjourney, likewise, may offer four responses to a user’s prompt in the form of images. But at no point does the system come to anticipate the taste preferences of any user, unless the prompt itself is modified. This is communication: the user sends signals to be interpreted and decoded. It is not a conversation, as no record of any transformation lingers in the system’s neural net afterward.

Musicolour is at once a proposition about conversational technologies and an experiment in interpreting the black box of a system as an object of interaction. It’s this idea of designing tools to provoke a conversational relationship that is particularly useful in today’s information environment. Bird and Di Paolo point out that this view of Pask is a radical departure from objective science, where the presence of observational bias is “often treated as a problem we would wish to minimize if we cannot eliminate,” while Pask’s approach acknowledged “that we have limited, incomplete knowledge about many systems we want to understand [and] ... by interacting with these systems we can constrain them, and ourselves, and develop a stable interaction that is amenable to analysis” (204). They suggest that Pask viewed interactive technology not merely as a model or representation of the observable world, but as a model to be observed, “a scientific problem in itself” (205).
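A minimal sketch makes the asymmetry plain. The `StatelessChat` wrapper and `frozen_generate` stub below are hypothetical, not any vendor’s API; they model only the architecture described above: a frozen generator behind an interface whose sole “memory” is the transcript it re-sends with every turn.

```python
def frozen_generate(prompt: str) -> str:
    """Stand-in for a trained model: weights fixed at training time,
    output a function only of the prompt it is handed. Hypothetical."""
    return f"(a plausible continuation of {len(prompt)} chars of prompt)"

class StatelessChat:
    """Minimal model of a chat interface over a frozen generator.

    The only "memory" is the transcript the interface re-sends with
    each turn. Delete the transcript and nothing of the exchange
    survives anywhere in the system: communication, not conversation.
    """
    def __init__(self):
        self.transcript = ""  # memory lives here, outside the model

    def send(self, message: str) -> str:
        self.transcript += f"User: {message}\n"
        reply = frozen_generate(self.transcript)  # prompt = full history
        self.transcript += f"Bot: {reply}\n"
        return reply

chat = StatelessChat()
print(chat.send("Hello"))
print(chat.send("Do you remember me?"))  # only via the re-sent transcript
chat = StatelessChat()                   # "logging off": the memory is gone
print(chat.send("Do you remember me?"))  # the model is exactly as it began
```

Musicolour’s interest values drift with every performance; here, nothing drifts. In Pask’s terms, the conversation ends the moment the transcript is discarded.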
I view this as a way of reversing key questions about computers and autonomous systems: from the typically engineering-framed question of “what can machines do with people?” to the more sociologically rooted question of “what can people do with machines?” Pask, as an artist-engineer, was extraordinarily mindful of the presence of the observer and their influence on observed systems. Arguably, designing systems for exchange and interrogation — “maverick machines” that framed alternative possibilities for computation — can be seen in the work of artists, activists, and researchers engaged in critical relationships with dominant ideologies of algorithms.

Projects borrowing Pask’s lens of alternative pathways include Tega Brain’s “Solar Protocol” (2020), a self-organizing network in which solar-powered host nodes are activated according to the strength of sunlight, and Caroline Sinders’ “Feminist Data Set” (2017), which re-imagines a complete data pipeline for AI training along feminist principles. Works like Lauren Lee McCarthy’s LAUREN, meanwhile, are mindful of creating contexts for learning from a system by becoming part of that system. LAUREN places McCarthy at the receiving end of a chatbot, creating a system for delivering instructions that fits the definition of a “maverick machine” but also pushes on Pask’s definition of “conversation” as potentially dialogic — though ultimately cut short through an abundance of togetherness, as LAUREN is ever-present and reduced to the role of servant. I would not suggest that these artists owe a direct debt to Pask. Instead, I would suggest that Pask articulated a number of positions and frameworks that are being rediscovered today, and that we might apply some of them to the contemporary analysis of artificial intelligence: how we interpret and understand it, and how we position ourselves as observers in relationship with it.

Paskian AI
Generative AI is an acceleration of data collection and automated analysis that fits the Paskian vision of information systems he articulated in the early 1980s. Black box systems are constantly deployed without proper testing or concern for impacts. A case in point is automated surveillance systems, which consistently show biases against people of color but are nonetheless funded and put into place in large cities across the world.[12] On the other end of this surveillance, similarly scraped troves of training data are used to build generative image systems, creating images inevitably biased toward the stereotypes found online.[13] Meanwhile, we are awash in bizarre claims that statistical probability engines are developing sentience or self-awareness, as in the case of Blake Lemoine, a Google engineer convinced that the LLM he was testing was conscious.[14]

There is no simple way to navigate these confusions, but I propose that Pask’s approach to black boxes, combined with conversation theory, offers at least one strategy. Pask proposes that we observe ourselves in interaction with systems, with the intent of noting changes as we move from micro to macro views. Cybernetics, as a study of interrelated complex systems and their interactions, offers us some tools. But Pask makes clear that there may be paradoxes and contradictions within systems when viewed from improper distances. Move closer, or further away, and these tensions begin to make sense as interactions between subsystems. Likewise, this “movement of lenses” is not just metaphorical, but demands a multi-disciplinary, trans-experiential approach to viewing the system: ideally one that takes not just the perspective of one’s expertise but a diversity of lived experience into consideration. What I see you may not see, but we might see it together.

Though Pask aimed for a precise science of conversation, I suspect the benefit of this thinking lies in its acknowledgment of subjectivity. There would be no illusions of a bias-free system under Pask’s vision of AI: it is entirely biased, and reducing or eliminating bias is folly. This is a much better starting point than the engineering mindset of AI, in which social position can be erased or obscured if only the system is calibrated well enough. Instead, we may enter into conversation with the black boxes: interacting with the systems we wish to understand, placing our own subjectivities into those interactions. The maverick machines of Gordon Pask were opportunities for us to navigate these loops of subjectivity and conversation: to reflect on what we bring, observe what our presence changes, calibrate to that response, and try anew.



[1] Pask, G. (1975). Conversation, cognition, and learning. Elsevier.

[2] Bird, J., & Di Paolo, E. (2008). Gordon Pask and his maverick machines. In P. Husbands, O. Holland, & M. Wheeler (Eds.), The mechanical mind in history (pp. 185–211). MIT Press. http://users.sussex.ac.uk/~ezequiel/Husbands_08_Ch08_185-212.pdf

[3] Pask, G. (1960). The natural history of networks. Proceedings of International Tracts in Computer Science and Technology and Their Application, 2, 232–263. https://www.pangaro.com/pask/Pask-1960-TheNaturalHistoryofNetworks.pdf

[4] Pask, G. (1966). Comments on the cybernetics of ethical, psychological and sociological systems. In J. P. Schade (Ed.), Progress in biocybernetics (Vol. 3, pp. 158–250). Elsevier.

[5] Pask, G. (1980). The limits of togetherness. In S. H. Lavington (Ed.), Information Processing: Proceedings of IFIP ’80 (pp. 999–1012). North-Holland. https://www.pangaro.com/pask/pask%20limits%20of%20togetherness.pdf

[6] Pangaro, P. (2001). THOUGHTSTICKER 1986: A personal history of conversation theory in software, and its progenitor, Gordon Pask. Kybernetes, 30(5), 790–807.

[7] Pangaro, P. (2017). Questions for conversation theory, or conversation theory in one hour. Kybernetes, 46(9), 1578–1587.

[8] Pask, G., & Curran, S. (1982). Microman: Computers and the evolution of consciousness. Macmillan.

[9] Pangaro, P., & McLeish, T. J. (2018, April). Colloquy of Mobiles 2018 project. In F. Grasso & L. Dennis (Eds.), AISB 2018: Cybernetic Serendipity Reimagined, Liverpool, UK. Taylor & Francis.

[10] Pangaro, P. (2017). Questions for conversation theory, or conversation theory in one hour. Kybernetes, 46(9), 1578–1587.

[11] Tilak, S., & Glassman, M. (2022). Gordon Pask’s second-order cybernetics and Lev Vygotsky’s cultural historical theory: Understanding the role of the internet in developing human thinking. Theory & Psychology, 32(6), 888–914. https://doi.org/10.1177/09593543221123281

[12] Johnson, T. L., & Johnson, N. N. (n.d.). Police facial recognition technology can’t tell Black people apart. Scientific American. Retrieved October 13, 2023, from https://www.scientificamerican.com/article/police-facial-recognition-technology-cant-tell-black-people-apart/

[13] Turk, V. (2023, October 10). How AI reduces the world to stereotypes. Rest of World. https://restofworld.org/2023/ai-image-stereotypes/

[14] Christian, B. (2022, June 21). How a Google employee fell for the Eliza effect. The Atlantic. https://www.theatlantic.com/ideas/archive/2022/06/google-lamda-chatbot-sentient-ai/661322/