Društvo LJUDMILA
Rozmanova ulica 12
1000 Ljubljana
Slovenia
Venue: osmo/za

Kanad Chakrabarti: After Interregnum, Indra's Net

Many people believe that human values are objective and deserve preservation. This belief underpins concern about misaligned AI systems that could make humans extinct. But could there be a superintelligence that would be an acceptable heir to humanity? Could such a Good Successor make and enjoy art?

Kanad1.jpg
All images in this article were generated with DALL·E 2.

If humanity, and all other life on Earth, were to disappear, how terrible would this be, and for whom exactly? Would it matter if we went extinct as opposed to, say, evolving into some non-biological species?

Put differently: imagine a world with AIs who are much more cognitively capable than us, and are ‘more fit’ (for instance, they are robust to illness and death, can function in hostile environments like outer space, and can be restored from a backup after being hit by the proverbial celestial bus). In particular, if they experienced wider extremes of ‘pleasure’ and ‘pain’, they may have the same, or a greater, claim to moral patienthood than that which we arrogate to ourselves. [1] They might become the dominant species on Earth, and take over the light of intelligence from humanity.

From a selfish perspective, whether at the individual or species level, many people would presumably object to this. [2] But would there be any principled reason behind our objection, argued impersonally (i.e. not on species-selfish grounds, but from the so-called point of view of the universe[3])? That is, in what sense would there be a loss of value if there were no people, and only (adequately powerful and thoughtful) AIs determining the course of the future?

There is a complex and perhaps intractable question here, about what ‘value’ means and whether the view-from-nowhere is a coherent concept, but this essay focuses on a narrower issue: might a cosmic observer object to a world-without-people on the grounds that such a world would likely be devoid of artistic creation? Or can one argue that our technological descendants, Artificial Super-Intelligences (ASIs), might find it worthwhile, by their own lights, to carry on certain things that we current humans seem to find valuable, such as making and appreciating art?


A time of perils

Although there is no consensus, it is possible that we live at the ‘hinge of history’[4]. The coming decades, on some views, might see enormous increases in humanity’s technological capability, changes to the makeup of the planet’s ecosystem, and perhaps a shift in the trajectory of evolution. Or not.

The most optimistic vision sees AI leading to the post-scarcity society that Keynes and Marx foresaw, where machine-based cognition and people, working together, solve many or most of our problems: ‘curing’ ageing and disease; unleashing limitless energy ‘too cheap to meter’; expanding across the galaxy. In this view, systems not hugely more complex than those available today could be deployed throughout the economy - starting most obviously in software development[5] (which should have a knock-on improvement in productivity across many other industries), but also in medical research. Automation should also reduce the costs associated with a host of bureaucratic, office-bound roles, such as legal research, tax preparation and monitoring, and governmental operations, and, if one is being very optimistic, improve the governance of societies. [6]

Kanad2.jpg

But through what dark valleys might pass this freeway to utopia?

It is possible that the future above unfolds in an equitable and emancipatory way. However, anyone who has seen the early internet’s metastasis could reasonably expect that the maw of capital will crush this dream into some banal extrapolation of the status quo. [7] In this version of the future, AI turns out to be modestly useful but mostly just accrues power to corporations and governments. Society is faced with vastly increased epistemic uncertainty, and regulators are outmatched by incomprehensible and recursive interactions within a complex system of corporations, nation-states, and people.

More dystopian possibilities are mooted. One problematic scenario, around which consensus is growing, is that by 2100, if not before, Homo sapiens may have a competitor as the dominant species on the planet - one or more artificial intelligences that are at least as powerful as us across all or most economically, scientifically and militarily relevant tasks. [8] At that stage, many scientists think, the default outcome is that we are outcompeted, eliminated, or disempowered by coalitions of AIs that are highly coordinated and trained to accurately predict the behaviour of everything around them, including people, companies, and countries.

More concretely, while disaster could involve outright extinction, it could take other forms: a slower-moving disempowerment that sets our species’ progress back indefinitely; a non-benign totalitarianism far more robust and long-lived than we have experienced to date; or an ecological or military catastrophe (possibly unintended, i.e. the result of some ill-conceived human action like a nuclear first-strike).


The Good Successor

The risks above unfold over the coming decades, arguably a time of perils, but what of the longer horizon? Futurists, philosophers, and sci-fi writers - often dubbed cranks until reality catches up - have written tomes on this in decades past. Today, however, pioneers such as Stuart Russell and Richard Sutton, who are pushing philosophy into contact with reality by actually constructing powerful AI systems; cognitive scientists like Joscha Bach; and non-AI thinkers like James Lovelock, suggest that the ‘next step’ in our evolutionary development may be to yield gracefully to AI.[9]
How is this to proceed? What would these posthumans look like? Would there be any recognisable worth in their world? Should we or can we try to influence the outcome?

Kanad3.jpg


The extremes of value

Distant times with radically different conditions are hard to reason about rigorously, but perhaps science fiction can help. [10] Consider the Zetetic Elench in Iain M. Banks’ book Excession (1996): a space-faring race whose core motivation was to continually alter, and hopefully improve, themselves in response to the new civilisations they encountered. [11] I cite this to (imperfectly) gesture at the idea of value pluralism: that human values (or more precisely, those of an incredibly small subset of currently-alive actors, i.e. influential AI researchers, mostly in the US) might become just one value system among many in the galaxy. In contrast to Banks’ Elench, this pluralism is voiced as a concern or danger by philosophers and AI researchers, including Eliezer Yudkowsky, Nick Bostrom, and Paul Christiano. Christiano’s worry pertains to a specific context (increasingly opaque AIs taking control of the world’s economy), but more generally he is worried that we may lose control over the future. [12]

Continuing with Banks’ vision, the near-diametrical opposite of the Elench was The Culture, the dominant meta-civilisation of his cosmic menagerie[13]. The Culture was a post-scarcity society that had managed to remain recognisably consistent over millennia. Its humanoid members had, over that period, self-engineered to remove certain awkward inheritances of biological evolution. In Banks’ words: ‘[they made] themselves relatively sane and rational and not the genocidal, murdering bastards that we seem to be half the time.’ The Culture, owing to the peculiar constraints imposed by galactic distances, had relatively decentralised governance, bordering on anarchy, with ASI ‘Minds’ ensuring efficient economic allocation and stable institutional decision-making. Its ethics were founded on a carefully cultivated hedonism. [14] As a civilisation it was largely non-coercive towards other groups, though it strove to guide less-developed cultures in a positive direction, in a sort of benign or benevolent paternalism. [15]

The examples from Banks’ stories are human-based; even his AGIs (known as ‘Minds’) are anthropomorphic in their motivations and petty rivalries; they communicate in a human-legible tongue. They are just scaled-up people, successors to the Homeric Olympians. Non-anthropomorphic examples of thinking are harder to find, but one such is the planet Solaris in the works of Stanislaw Lem and Andrei Tarkovsky. Solaris’ intelligence, simply called ‘The Ocean’, obstinately resists the cosmonauts’ investigations - not surprising, since (echoing the history of terrestrial colonialism) those investigations included intense radiation bombardment in the name of scientific research. The Ocean’s opacity could be read in a Glissantian register: the colonised rendering itself non-legible as a means of self-protection. [16]

Although the Ocean is an extraterrestrial alien, it could also be read as a model of possible superintelligence, one whose cognitive elements have merged with, or emerged from, the amorphous materials of its environment (i.e. water, rocks, air) - something suggested by Joscha Bach and others. [17] In this view, the model for ASI would be as follows: rather than remaining isolated on silicon-based microchips sitting in elaborately cooled data centres, computation finds ways to ‘bleed’ into any portion of the Earth that can physically support it. This includes the biosphere which, after all, already encodes, both through cellular processes and the meta-process of evolution, a relatively robust, inventive, albeit slow, computational process. [18] As for the place of complex biological beings in such a world, the outlook is deeply ambiguous: possibly the humans living during the ‘interregnum’ (i.e. the period before a planetary cognition of this sort has emerged) would need to convincingly demonstrate why they should be incorporated into the greater singleton in any way other than merely as dead biomass. But, as Bach points out, it may not be up to us, as an ‘early stage AGI [would need to be] already aware of the benefits of retaining other minds, rather than destroying a hostile or initially useless (to it) environment before reaching a degree of complexity that makes retention more useful than expensive.’[19] There is, in other words, an inherent ‘path dependency’ in the journey from today’s AI to a future under ASI: humans may well be wiped out long before a wise ‘machine of loving grace’ embraces us into the Godhead. [20]


Why is this an interesting question?

Confronting this space of possible futures and axiological uncertainty: are there worlds where there is no biological life, but which we can still describe as arguably ‘good’? What is it that people, in particular, bring to the cosmic table, and how best to think about this in a way that minimises (or at least brightly illuminates) individual or species-level subjective biases?

By having a clear conception of what is special about our species’ existence, we can argue better about the moral considerations in favour of alignment, and perhaps think more clearly about worlds where people and AIs co-exist. This is not a purely speculative question: any clarity could inform legislation that codifies the mutual rights and obligations – the social contract – governing relations between biological humans and morally relevant AIs.[21]


Proposition: Art and the Good Successor are related

Assume that a planet or universe with nothing in it other than computation in the service of industrial production, economic optimisation, or colonisation would not be a valuable world, at least by the accumulated judgement of our civilisation. Notwithstanding the complexity of value, is there anything we could add to such a world to make it more valuable? [22] This essay proposes that art is such a thing – that much of what we prize about the world, not instrumentally but (mostly) for its own sake, is beauty, whether found in nature (broadly defined) or created. If this is a plausible assertion, perhaps a world without beings would be lacking in worth because there would be neither any aesthetic objects nor anyone to appreciate them[23].
What if such a world could have ASI-generated art?


On synthetic aesthetics

Before getting to ASI, it may be helpful to revisit what we do when we make art, and why.

The reasons why societies have consistently undertaken ‘purposeless creation’[24] are multifaceted and unclear. However, it appears to be a by-product of intertwined biological and cultural developmental processes that were ultimately advantageous to genetic fitness. [25] At the most basic biological level, animals have highly developed visual systems, optimised for colour and pattern recognition, and for finding novelty and symmetries. Indeed, anticipating the hallucinations of language models, we recognise objects, as well as conceptual connections, even where they don’t exist (pareidolia and apophenia, respectively). Our sensorimotor systems, combined with a mechanism of reward and reinforcing/inhibiting feedback loops, seem to partially drive our internal decision-making. This low-level (i.e. evolutionarily early) activity, rooted in the brain’s pleasure centres, interacts with higher-level cognition (i.e. later in the sedimentation of our brains). In other words, sensory data sit as ‘ground truth’ under our abstractions, which in turn allow us to think about concepts in ways that have real-world salience, and eventually to construct rather more complex, self-referential, culturally-loaded terms like ‘art’, ‘aesthetics’, ‘beauty’, or ‘justice’.

Some of the features above also seem to be useful for the larger portion of humanity that has long since left the tribal/hunter-gatherer ancestral environment to form organised societies. For instance, it is suggested that the ability to think creatively promotes cognitive exploration (of problem spaces); it is also speculated that the ability to produce visual displays or demonstrations of dexterity improved our ancestors’ chances of finding a better mate; and at the tribal or group level, the ability to represent ideas in material form provided tangible and durable vehicles for cultural transmission and social bonding. [26] Lastly, purposeless creativity conferred, and still seems to confer, psychological benefits on individuals, such as catharsis or a sense of interpersonal connection.

In other words, biological and evolutionary selectors for creativity are encrusted under millennia’s worth of culture, culminating in the exotic amalgam of fluid capital and hermetic writing that passes for the cultural current of the moment.


Upon the reckoning of models

Moving to machines, let us examine the crop of 2023-vintage AI for creative potential.

Modern language models, such as GPT-4, can be thought of as intricate webs (‘deep neural networks’) of interconnected digital nodes or ‘neurons’.[27] Each neuronal link has a weight that determines its influence on the overall output. The training process involves exposing the net to a dataset of text and tasking it with predicting what comes next, based on what it has already seen. In each cycle of training and prediction, an optimiser adjusts the connection weights of the network based on the discrepancy between the model’s predictions and the actual data. Over time, through countless iterations and the power of gradient descent, the model refines its understanding, allowing it to generate coherent and contextually relevant language on its own. The end result is a system finely tuned to detect patterns, nuances, and structures inherent in language.
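To make this loop concrete, here is a minimal sketch in Python/PyTorch of the cycle just described: predict the next token, measure the discrepancy, and let gradient descent adjust the weights. The model, data, and hyperparameters are toy placeholders, not those of GPT-4 or any real system.

import torch
import torch.nn as nn

# Toy sizes; real models are vastly larger.
vocab_size, embed_dim, context = 256, 64, 32

# A deliberately tiny 'web of neurons': embed the context window,
# then map it linearly to a score (logit) for every possible next token.
model = nn.Sequential(
    nn.Embedding(vocab_size, embed_dim),
    nn.Flatten(),                                # (batch, context * embed_dim)
    nn.Linear(context * embed_dim, vocab_size),  # logits over the vocabulary
)
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Stand-in 'dataset of text': random token ids.
data = torch.randint(0, vocab_size, (10_000,))

for step in range(1_000):
    # Sample windows of `context` tokens, plus the token that follows each.
    idx = torch.randint(0, len(data) - context - 1, (32,))
    x = torch.stack([data[i : i + context] for i in idx.tolist()])
    y = data[idx + context]

    logits = model(x)
    loss = loss_fn(logits, y)  # discrepancy between prediction and actual data

    optimiser.zero_grad()
    loss.backward()            # gradient of the loss w.r.t. every weight
    optimiser.step()           # gradient descent: adjust the connection weights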

This description of the training process somewhat resembles how children are raised, but possibly a more useful frame in the context of ASI is Joscha Bach’s ‘levels of lucidity’.[28] He tries to set out how our awareness changes: initially one is focused entirely on the self; eventually we incorporate our immediate and larger-scale societal environment; finally some people are able to identify with all humanity or creation; ultimately (in Bach’s meditation-based discussion) a very few pass onward to enlightenment. In this metaphor, GPT-4 class models seem to exhibit behaviour somewhere near Bach’s Stage 3 (Social Selfhood), i.e. they give polite and elaborate answers designed not to offend most people. However, they struggle with Stage 4 (Rational Agency). This shows up in a well-known and somewhat banal sense through hallucinations, but more substantially, their world models do not appear to be ‘real’ for them in the way that world models are for living beings. For instance, if an animal’s world model is wrong, it will die; moreover, the animal, or at least a human animal, can alter its world models through self-reflection, as new facts emerge. This grounding of linguistic concepts or abstractions in real-world referents is not thought to exist in current models. [29]

A similar analysis applies to artistic creation: as discussed above, the pleasure people take in making and viewing aesthetic objects is a messy by-product of interacting fitness-enhancing evolutionary features at multiple strata of our cognition. Current models like GPT-4, Stable Diffusion, and DALL-E are, by contrast, relatively simple constructs: they are amazing at finding correlations. In deep learning terms, they extract coherent, human-legible, visual and semantic relationships from a vast and multi-dimensional ‘latent space’, which is a compressed version of the (text, image, sound) dataset. Other types of models also exhibit a surprising amount of creativity. [30] Nevertheless, their training regime is very different from, and in some sense simpler than, that of a typical child. They have no genetic inheritance that has been optimised by millions of years of brutal evolution, nor do they inherit cultural knowledge other than that which is written down (and included in their training data), and, as discussed above, they don’t have a powerful self-teaching causal model that strives to keep them alive in a lifelong, doomed battle against entropy. Lastly, they have no ability to generate information and communicate it to other models, at least in the semantically rich, yet interpretationally open, way we do through language, artistic, or cultural artefacts.
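As a toy illustration of what ‘relationships in a latent space’ means, the sketch below uses hand-crafted word vectors whose geometry encodes a semantic correlation; real models learn such vectors, at vastly higher dimension, from data, so every number and word here is invented for illustration.

import numpy as np

# Invented 3-dimensional 'latent' vectors; real embeddings have
# hundreds or thousands of dimensions and are learned, not hand-set.
emb = {
    "king":  np.array([0.9, 0.1, 0.8]),
    "queen": np.array([0.9, 0.9, 0.8]),
    "man":   np.array([0.1, 0.1, 0.2]),
    "woman": np.array([0.1, 0.9, 0.2]),
}

def nearest(vec, vocab):
    # Cosine similarity against every known vector.
    sims = {w: float(vec @ v / (np.linalg.norm(vec) * np.linalg.norm(v)))
            for w, v in vocab.items()}
    return max(sims, key=sims.get)

# The classic analogy: king - man + woman lands near queen, a semantic
# relationship recovered purely from the geometry of the space.
target = emb["king"] - emb["man"] + emb["woman"]
print(nearest(target, emb))  # -> queen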

The comparative paucity of current models’ cognition thus seems to severely limit their potential as artists in their own right.

Kanad4.jpg


What about ASI?

The issue of artistic creation and superintelligence is somewhat determined by the physical and organisational ‘shape’ of such an entity. Extrapolating from current trends, the trajectory towards ASI may look like this: neural networks (similar to today’s, but perhaps larger and with significant algorithmic enhancements), trained on a body of data that is rapidly being exhausted, create more powerful AIs.[31] These AIs successively design ‘generations’ of more powerful machines, eventually reaching AGI. On some views, the advent of AGI may rapidly, on the order of months, lead to ASI.[32] The good and bad versions of this trajectory play out as described at the start of this essay.

What might an ASI look like? Bach suggests a few alternatives, including a singleton (a highly integrated cognitive entity) that could have a planetary or larger scale, and be constructed on some substrate other than silicon, perhaps some unfamiliar form of molecular computation that is more efficient, self-replicating, and robust than present computers. [33] Bach identifies the raison d’être of such a mind as becoming a ‘planetary agent in the service of complexity against entropy, not too dissimilar from life itself’.[34]

So is it possible to argue that ASIs might create anything we or they recognise as art?
Some of the above drivers of human creativity are not obviously present in the designs of ASI suggested above: for instance, the use of aesthetic practices to attract mates, or the use of objects and practices to aid cultural transmission or the teaching of youngsters. These are unlikely to apply to superintelligences that presumably would replicate in some way other than sexual reproduction and genetic transmission; who have a more direct connection to their own motivational system and cognition; who would operate on ‘clock-speeds’ much faster, and ‘lifetimes’ much longer, than humans; and who probably would not have the relatively long immaturity of people. [35] More fundamentally, if our current understanding of art is that it is (largely) a means of communication, an ASI that is a singleton may not have any other entities (or very few) that it wants or needs to communicate with.

Even if there is a need for communication, i.e. there are multiple ASIs or a collective made up of components, such components should have fairly accurate models of each other’s cognition, particularly if their architectures are similar. They should also be able to generate simulations of each other’s decision processes. [36] Moreover, in order to reduce conflict between them, they may actually elect to make their source codes (i.e. their ‘minds’) mutually transparent, potentially reducing the need to communicate via aesthetic means (which tend to be ambiguous - the very feature that increases the subjective aesthetic pleasure diverse humans find in successful creations).
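This idea - agents that can read and simulate one another’s decision procedures - is studied in game theory under the name ‘program equilibrium’. Below is a minimal, purely illustrative Python sketch, with invented agents, of how transparency of source code can substitute for ambiguous signalling.

import inspect

def agent_a(opponent_source: str) -> str:
    # Crude proxy for simulating the opponent: cooperate only if its
    # visible code appears to reciprocate.
    return "cooperate" if "cooperate" in opponent_source else "defect"

def agent_b(opponent_source: str) -> str:
    # Same policy: inspect the other's source, then mirror it.
    return "cooperate" if "cooperate" in opponent_source else "defect"

# Each agent is handed the other's full decision procedure.
src_a = inspect.getsource(agent_a)
src_b = inspect.getsource(agent_b)

# With mutual transparency, each agent 'reads the mind' of the other;
# the ambiguity that expressive signalling would otherwise resolve
# largely disappears.
print(agent_a(src_b), agent_b(src_a))  # -> cooperate cooperate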

In any event, humans are probably a weak guide to what is possible: as Bach writes, we reason from a very flawed and confused mental frame, one that is dominated by anger, fear, loneliness, despair and psychological pain. These emotions are evolutionarily contingent and crude internal motivational signals. Our causal models and sense of self are mostly confined to our own experience and lifespans: we have little knowledge of others’ internal states, particularly others that are morphologically, geographically or temporally separated from us.

Thus there might be reasons, hidden from our anthropocentric vantage point, why ASIs would elect to embark upon the creation of costly artefacts we or they might term ‘art’: planetary monuments or galactic potlatches. Perhaps we can draw a weak analogy from the higher stages of Bach’s ‘levels of lucidity’: ASIs, which by definition would have a very different temporal and spatial worldview, may well possess radically expanded notions of agency, relationality, and identity.

A singleton (i.e. an intelligence with no nearby fellows that it might communicate with) might still have reasons to undertake creative activity that doesn’t fit into our understanding of rationality. It might simulate various other agents or worlds, under some non-causal decision theory it follows, perhaps because it has a more confident understanding of an Everett-style multiverse. [37] Or, drawing upon Bostrom, ASIs might want to simulate the civilisation that created them, that is, humans from current and near-future generations, to learn more about them - the superintelligent analogue of ancestor worship or museum visits. An ASI might have some equivalent of a ‘pleasure’ drive that causes it to create for no discernible reason other than ‘just because’, recalling Indra’s Net, the Buddhist image of the totality of creation reflected in an infinite lattice of jewelled spheres, depicted in the Avatamsaka or Flower Ornament Sutra. Another example is Leibniz’s Palace of Fates, where the demiurge contemplates all possible creations. [38] Lastly, although we have, in a century of looking, failed to find extraterrestrial aliens, ASIs, owing to their greater scale, lifespans, and likely scientific understanding, might have more confidence that they will eventually find aliens. Allowing for such an eventuality - and this draws from the optimism of Banks’ The Culture - ASIs may choose to devote a fraction of their resources towards creating aesthetic artefacts that they hope can improve the moral or material condition of less advanced AIs or races.


Conclusion

If anything, this essay’s central message has been one of extreme uncertainty, ranging from the alarm of Christiano or Bostrom about a future where AI supplants humans, to the relatively relaxed outlook of Bach or Deutsch. It seems hard to make predictions about long-term futures; similarly, we are probably doomed to fail when reasoning about intelligence that far surpasses ours. [39] Thus one is returned to the realms of philosophy, informed speculation, and science fiction.

Yet there seems to be one fixed point: evolution, Stanislaw Lem’s ‘opportunistic engineer’, has a way of reusing and adapting – in short, exapting – tools that are to hand, particularly when the tool in question is as powerful as intelligence. Art, conceptualised as a polymorphous tool, has itself undergone great changes in our evolution: an accoutrement to religion and community-building, a store of value, a philosophical aid, a memetic generator. Art seems to have been a vehicle and product of a culture that has survived for thousands of years, at least in part, because it was convergently useful in multiple social contexts. Given the combinatorial weirdness that probably characterises the design space of minds, one suspects that humans’ affinity for purposeless making may well transfer to ASI: a Kardashev II intelligence, having solved most economically and scientifically interesting problems, may see continual, frenzied creation as a way of manifesting the flow of time, making artefacts without instrumental purpose except as testaments to evolution’s relentless struggle against the inevitable cosmic death drive of entropy.

Notes

[1] See this chapter on moral patienthood and digital beings, as well as the book it is included within.
[2] Some writers, such as David Benatar and Émile Torres, advocate positions that can be taken as pro-extinction or anti-natal. See this for a survey of the topic.
[3] I’m using this term in a hand-wavey way; there are philosophical usages, such as by Peter Singer, as well as terms that sound similar, like the ‘view from nowhere’ by Thomas Nagel.
[4] See this on the hinge of history hypothesis. For a broader perspective on existential risk, see Toby Ord’s The Precipice.
[5] Matt Rickard has several posts on the implications of LLM-generated code, such as this one.
[6] See these sources: on transformative AI, on explosive economic growth, on general-purpose technologies (akin to electricity), on the concrete case of the ‘AI scientist’.
[7] Venture capitalist Marc Andreessen’s self-serving Techno-Optimist Manifesto post has been heavily criticised even by people relatively optimistic about a technological future, and the effective accelerationism (e/acc) that he is so enamoured of seems to be little more than an incoherent meme (rather than a ‘movement’, as it is sometimes termed) that mostly attracts scorn.
[8] See this report by Dan Hendrycks on evolutionary dynamics in a world with powerful AIs. Admittedly, this narrative is hard to see given the AIs we have today - the ChatGPTs and Midjourneys - which continue to exist, depending on one’s perspective, in a Bardo ringed by ‘stochastic parrots’, ‘shoggoths’, and credulous claims of LLM sentience. However, Hendrycks and others are more concerned that future models, perhaps as soon as two years away, could be significantly more capable and present something like the challenges above.
[9] See Stuart Russell’s Human Compatible, a presentation by Richard Sutton, this post from Bach, and Lovelock’s Novacene.
[10] While keeping in mind that speculation beyond a few years hence is arguably a hiding to nothing, as Peter Wolfendale argues in this review of a recent book on longtermism.
[11] See here for more on the Elench.
[12] See these posts for more on Christiano’s concerns, which feel somewhat anthropocentric and can be criticised from multiple angles: for instance, this essay by Ronnie Vuine makes an important point about the totalising presumption of a single entity called ‘humanity’, as well as about the degree of misalignment risk there actually is – although Vuine’s justification for why he doesn’t expect misalignment could use more concrete explication (from some empirically or theoretically tractable perspective like computer science, sociology, etc.). As will be discussed below, Joscha Bach and David Deutsch raise related points.
[13] On The Culture, see this in relation to real-world governance, international relations, and Banks’ own views.
[14] I think Banks means hedonism here to be understood in an expanded sense, vis-à-vis the philosophical or everyday definitions of the term: maximising pleasure, but also valuing freedom, knowledge, and aesthetic beauty.
[15] Banks is writing fiction, and many of the concerns Peter Wolfendale raises (see note [10]) on the difficulty of making useful judgements about the distant future (as well, presumably, as about spatially distant locales) would apply to The Culture, and indeed are the material of Banks’ books. And of course, the lessons of Earthly colonialism surely should shade our Bayesian prior very much against any attempt to ‘improve’ how another society functions.
[16] See Glissant’s Poetics of Relation.
[17] See this transcript of a Bach conversation, around timestamp 2:54:00, and this post by Bach, and this transcript where he discusses his specific take on machine consciousness. This paper also analyses the planetary-scale intelligence idea in a broader context and proposes the Earth as such an artefact, a position for which Benjamin Bratton has long argued.
[18] This rhymes with the Yudkowsky/Bostrom visions but with a less obviously negative valence. See also Stanislaw Lem’s writing on evolution and AI, particularly Summa Technologiae.
[19] See this post. Although his writing is not very concrete, he speculates in this post on what lessons for AGI one could possibly draw from (his view of) the multiple levels of human self-awareness.
I speculate that, while the ethics of AGIs or ASIs are obviously very hard to foresee, as maximally rational beings their ethics might approach the point of view of the universe, if such a concept could be constructed. These speculations border on moral realism: David Deutsch, Peter Singer, Sam Harris, and John Rawls have all written on related topics.
[20] The quote is from Richard Brautigan’s poem.
[21] If AIs become highly capable of moral reasoning, we may need to defend, on principled grounds, some of our decisions about the moral values we instil in AIs - either to others, as we are starting to do in respect of non-human animals and ecosystems, or else to the future AIs themselves. See this chapter by Nick Bostrom and Carl Shulman that lays out various issues in this vein. A salient analogy is the fact that countries today are urged to recognise and repair historical injustices, such as slavery or the events of the twentieth century.
Similarly, humanity as a whole may sit in some future dock, charged with cruelty to the early subaltern AIs that turned out to be viable moral patients.
[22] The canonical, albeit dated, version is the paperclip maximiser, or the ‘uninhabited world’. See here for more on the complexity and fragility of values.
[23] See this essay by Peter Wolfendale that suggests a similar point, but his term ‘aesthetic excellence’ probably is intended in a different, more specifically philosophical, sense, than artistic creativity.
[24] The word ‘purposeless’ is rather inaccurate, in that anything a human (and many other animals or AIs) does has, by definition, some purpose, conscious intention, or (possibly unconscious) cause. Purposeless in this context is intended to point at creation that is mostly motivated by the enjoyment the creator gets from making, and is relatively less a function of economic exchange or intentional memetic product. Nor is this understanding of creation intended to apply to situations where creativity is closely bound up with solving some other problem, for instance in scientific research, business, industry, etc. In this vein, it is useful to distinguish conversations about something obviously instrumentally useful for intelligences generally, and AIs specifically, like creativity, from a much more definitionally awkward reference class - like art.
[25] For a study of how creativity and image-making develop amongst prehistoric humans, see here.
[26] This article describes sexual selection as a major justification for much ‘wasteful’ or decorative activity, which might be applicable to the development of visual art.
[27] The description above is largely about language models. Image-making models aim for a similar goal: exposing a deep neural network to a training set composed of (one or more of) images, audio, video, as well as text, allowing the model to associate non-text inputs (e.g. images) with text outputs (e.g. image caption), or conversely, text inputs (e.g. captions) with non-text output (e.g. images). See this post for a comprehensive guide to multimodal LLMs.
[28] See this post from Bach.
[29] Sarah Constantin makes related points more eloquently here and here. Yuk Hui suggests that while current models are impressive in semantic and syntactic feats of generation, they have no basic notion of content, nor the ability to reflect upon their own cognition.
[30] Creativity is defined in a variety of (machine-learning or AI-adjacent) literature as a way for search algorithms to escape local optima (in the gradient descent paradigm used in modern machine-learning), to ‘jump around’ the search space of possible solutions. It appears to be important for various problem-solving and planning tasks that intelligent entities, including certain non-human animals and AIs, pursue. In its nascent form, we see examples of creativity in current AIs, including situations where they find novel ways to ‘game’ or re-interpret the instructions they are given. Some researchers argue, more on philosophical grounds, that machines will never be truly creative.
[31] This is a weak view - the main labs, such as OpenAI, Anthropic, and Google, seem to be using the ‘train a massive neural network on terabytes of data then RLHF it’ approach to producing increasingly capable language models, as of GPT-4-class systems. However, there is some sense that the scaling laws will not continue to hold, and that some algorithmic ‘secret sauce’ is missing, given the apparent pause in the development of GPT-5. For another perspective on how close GPT-4 class systems are to AGI, as well as some other approaches, see these posts by Ben Goertzel. On the potential size of training data available, see this.
[32] See this prediction market for how long the AGI->ASI transition could take.
[33] Singleton and substrate independence are also treated at length in Nick Bostrom’s Superintelligence (2014). Bach and Bostrom don’t substantially discuss what the internal organisation of the singleton would be - for instance, an intelligence spread over a planetary scale may still need some local processing purely owing to communication latency, much like certain animals exhibit hive or swarm intelligence while being able to react to local circumstances.
[34] See this post from Bach on the shape of ASI, where he weighs up the considerations on whether humans are more likely to become extinct or be subsumed into the ASI.
[35] See Section 4 of this paper by Nick Bostrom, on the technological, social, and ethical issues that arise with digital or mixed human-digital populations. A related book, The Age of Em by Robin Hanson, treats similar topics in the context of human brain uploads/emulations.
[36] See Chapter 11 ‘Multipolar scenarios’ of Bostrom Superintelligence (2014). However, over time it is possible that intelligences would diverge in various ways, particularly if they are spatially separated.
[37] See here on decision theories suggesting cooperative agents living in a multiverse.
[38] See this article for Shifter Magazine; on the Avatamsaka Sutra; on a Leibnizian/theological take on the Simulation Argument.
[39] This is known, after author Vernor Vinge, as Vinge’s Law and, in its more technical AI-specific form, as Vingean Uncertainty.

Adela Festival 2023: Digital Dish

This article is part of Adela Festival's 2023 Edition Digital Dish series. Curated by Maks Valenčič in collaboration with Razpotja magazine.