Ted Chiang and What it is like to be an A.I.
One of the best writers of speculative fiction may be wrong about A.I. making art, but his critics get wrong what Chiang is actually right about.

Here’s a story:
Sometime in the not-too-distant future, something not really like a machine but not really like a human either will do what we think of as “scientific research.” Let’s call this something a “metahuman.” The pace and depth of metahuman research and its outcomes will outstrip the capacities of human scientific research, and eventually metahumans will have their own scientific communities and networks to which contributions by human scientists will be irrelevant. Science as we currently know it will advance beyond what even the brightest human scientific minds will be capable of understanding, leaving “human science” the task of merely trying to understand something ultimately unknowable to it, namely metahuman science, the ideas it generates and the things it creates. If metahuman science will be real science, understood as an advancing knowledge of, and intervening in, the world as it really is, then human science, understood as the exercise of human intelligence in the understanding of metahuman science, will be something like “hermeneutics,” a search for what the world means in the codes and artifacts, the ideas and things, produced by metahumans.
Versions of this story have become ubiquitous in the past few years as A.I. has advanced beyond its game-playing applications like AlphaGo and into wide public adoption with ChatGPT. Speculation about what a “superintelligence” will do and be capable of is no longer just the province of science fiction; it’s now the stuff of politics, and policy.
But this particular version of the story was told nearly 25 years ago, in the scientific journal Nature, under the title “Catching Crumbs from the Table,” and its author is Ted Chiang, one of the most celebrated writers of speculative fiction working today. Chiang’s original title for his story was “The Evolution of Human Science,” and Chiang’s attitude towards the idea of a superintelligence surpassing the capacities of human intelligence in that story comes across as one of sanguine resignation. The story ends on the following note: “We need not be intimidated by the accomplishments of metahuman science. We should always remember that the technologies that made metahumans possible were originally invented by humans, and they were no smarter than we.”
Of late, however, Chiang’s attitude has changed. The humans that are advancing A.I. may be “no smarter than we,” but Chiang believes they are compromised by capitalist incentives, and so what they are building, far from the promise of a new Utopia of intelligence-on-tap, is showing up as yet another chapter in the epic saga of technology-driven exploitation and impoverishment that sticks to the history of the modern era like gum to a shoe (or a boot heel, as most critics of this ilk would have it).
This is Chiang’s current position, at least. What’s disappointing is not Chiang’s clichéd critique of capitalism, but his reluctance to pursue the more intriguing philosophical, indeed speculative, lines of inquiry that so much of his fiction is known for. What’s even more disappointing is that critics of Chiang’s position largely accept the contours of his critique — they are no fans of capitalist exploitation either — but completely miss the implications of his early contributions to this conversation and what they mean for human science and, importantly for our purposes here, human art. In other words, there is a divide between Ted Chiang the A.I. critic and Ted Chiang the creative writer. To engage the former on his own terms will mean getting wrong what the latter is ultimately right about.
Chiang and his critics
To understand why, we first need some sense of Chiang the critic and his interlocutors: In a series of articles, foremost in The New Yorker, Chiang has analogized A.I. to the perpetually PR-challenged McKinsey & Co., whose list of notable contributions to big-business practice includes the mass layoff and opioid marketing, but whose real service to its clients, according to Chiang, is accountability laundering, something he believes A.I. and its boosters unwittingly promote; he has also presumed to explain to the layman “Why Computers Won’t Make Themselves Smarter,” in which he trashes the idea of machines being any good at what he calls “recursive self-improvement,” something humans are good at, especially when networked together in a “civilization”;1 and once ChatGPT was on the scene, Chiang saw its uses and abuses of language as leading to the kind of degradation one gets from repeatedly compressing and recompressing an image file (“blurry JPEGs,” as Chiang describes the results), the presumption being that these are little desired by, and of little use to, human creativity, let alone labor.
All of these essays have been warmly received by that spectrum of the commentariat that stretches between the academically inclined, who are quick to denounce “platform” and “surveillance” capitalism, and the ranks of workers in the media, advertising and publishing industries who only take their heads out of the sand long enough to read a New Yorker article that confirms their priors. That reception did get a bit chillier though when Chiang published his most recent attack on A.I., which did away with the clever analogies (but not his anti-capitalist bent) and went right for the lifeless jugular of A.I.’s most fraught promise: “Why A.I. Isn’t Going to Make Art.”
The basic contours of that attack are as follows:
Art making requires an artist to make a lot of decisions: A.I. systems like ChatGPT and DALL-E currently speed up that decision-making work by using averages and probabilities to substitute for decisions a human might make, and average and probable decisions don’t make for good art.
Labor-saving, in the form of paraphrase and interpolation, is anathema to what makes art what it is; for example (and obviously not one that Chiang offers), reading my paraphrase of Chiang’s “The Evolution of Human Science” does not give you access to that particular work of art; it might tell you something about it, but it’s not the thing itself.
Labor-saving is also anathema to basic communication and expression, which are necessary but not sufficient for making art. These are skills that are enhanced through training and practice. Though something like ChatGPT might make communication and expression faster and so easier for many, it does not extend or enhance those skills. As a kind of “turbo-charged autocomplete” (Chiang’s term), it degrades them.
Skills alone are not intelligence. Getting great at one thing, playing the game Go for example, which A.I. now does incredibly well, does not equate to becoming more intelligent.
For something to be art, it has to be intended as such by someone.
Very quickly, the internet lit up with explainers of why Chiang was wrong. Some writers, such as Jesse Damiani at Reality Studies, claim that Chiang misses how art — what it is, both historically and into some undefined future — depends upon context, so to weigh in so emphatically on A.I.’s incapacity to make art today cuts off what it could do, and how its products might be regarded, at some later date.
Damiani also points out that artists have been using A.I. as a tool in one form or another for decades. Though he concedes that this doesn’t address Chiang’s main argument (that A.I. independent of human decision-making will never make art), the fact that artists have used A.I. as a tool suggests that at some point it could become a rich medium like painting or photography, technologies of creative activity whose products gain recognition by having been intended as such by an artist.
It’s worth pointing out that Chiang had already lodged his skepticism of this idea in an earlier essay. In answering a question about whether LLMs will “help humans with the creation of original writing,” Chiang compares how writers use ChatGPT to the way artists used photocopiers for a time in the 1960s and 70s to make art. The results may have been interesting (on the whole they weren’t). But as Chiang rightly puts it, photocopiers never became an “essential tool” for artists, because — and here I’m making an argument for Chiang that he doesn’t make himself, at least not directly — the technology never afforded artists the necessary degrees of freedom for it to become a medium. Chiang’s error here was to confuse the technology, the photocopier, with what we might call the “techné” that the technology affords.
To illustrate this point, we could look at an artist like Hollis Frampton, who experimented with photocopiers in his practice not because the big boxy machine was something inherently interesting,2 but because it was another method of automatic-yet-synthetic image making through and by which he could extend his ideas about photography and film. Frampton regarded film — the idea of it, what one could do with it as a medium — as a “metaphor for human consciousness.” This is what drove his work. Were he alive today, Frampton would surely be fascinated by how ChatGPT and its derivatives are bringing the question of human consciousness to the fore in new representational ways. What does consciousness look like? How do we represent it? Perhaps the problem Chiang and others have with ChatGPT is that it’s not metaphorical enough.3

Like Damiani, Matteo Wong, writing in The Atlantic, also takes Chiang to task for his ahistoricism, his dim view of A.I.’s as yet unwritten future prospects, and its labor-saving promises. What Wong does that Damiani doesn’t is open the door of comparative intelligence. Wong rightly points out that there are things A.I. can do that humans can’t, such as find patterns in huge swaths of data and then predict potential extensions of that data. One salient example of this is AlphaFold, a Google DeepMind A.I. project that “discovered” the entire range of possible organic protein structures by accurately predicting all the ways their molecular bonds could fold. As if overnight, the library of known protein structures went from the hundreds of thousands to the hundreds of millions, a radical leap in our knowledge about the biological world and our ability to intervene in it.
But, with the door of comparative intelligence open, Wong fails to walk through it, leaving the question of what might distinguish animal from machine intelligence largely unanswered. I think the reason for this is that Wong, Damiani, and even Chiang-the-critic suffer from what we could call the “anthropic bias” in discussions of A.I.: In any conversation or question about what A.I. can do, what its merits might be, this bias tends to frame that conversation or question in terms of the inevitable “What will this do for us?” It’s a natural question to ask of any new product or process. But the categorical difference here, to which Wong alludes, is that we’re on the cusp of confronting, if not creating, a new kind of intelligence, one that functions in fundamentally different ways from animal, let alone specifically human, intelligence.
What is it like to be an ant?
One difficulty of overcoming the anthropic bias is that it’s something of an epistemic limit, a horizon beyond which, or over which, we don’t have the capacity to see or travel. Fifty years ago, the philosopher Thomas Nagel put this problem in more terrestrial terms when he asked “What Is It Like to Be a Bat?” Nagel’s conclusion was that we do not have the capacity to know what it is like to have the subjective experiences of another being, in other words to be a consciousness other than our own. There is something irreducibly first-person and subjective in consciousness, something that is unknowable to others and inaccessible to second- or third-person descriptions of it.
Nagel did close his paper with a “speculative proposal” however. Would it be possible to develop a new “phenomenology” that could “describe, at least in part, the subjective character of experiences in a form comprehensible to beings incapable of having those experiences”? For Nagel, this new phenomenology would be a type of philosophical language or formalism, a new representational or sign system with its own internal coherence, a “metaphor for consciousness,” say, and what it would do would be to overcome the fact that at present (this was 1974, but it still holds today), “we are completely unequipped to think about the subjective character of experience without relying on the imagination.”
When Nagel was contemplating this, he was not only thinking of non-human beings with a complement of different sense perceptions and so experiences — i.e. bats — but also human beings whose normal experience of the world is compromised in some way. Nagel’s example is someone who has been blind from birth having access to a description of what sight or even color might be like, a description that involves a new conceptual tool set and not merely what he calls “loose intermodal analogies” that are derivative of our own first-person experiences. Notably, Nagel admits that such a new phenomenology “would not capture everything,” that “one would reach a blank wall eventually,” and that “something would be left out.” He holds onto, or holds out, the idea that there may be something about subjective experience, consciousness, that is irreducible to third-person descriptions of what it is or what it is like.
What I find interesting about Nagel’s speculations, and even about some of the subsequent entries in the philosophical literature that took up his arguments, is how the beings under consideration — a bat, someone who might be as blind as one — not only hinge on the subjective experience of vision but also suggest a being who does not possess the full mindedness of a human. There is a kind of otherness involved in imagining what it would be like to navigate via echolocation, and how this would feel and shape one’s sense and understanding of the world. The same could be said for someone who has lacked sight from birth. There is a gap here that is hard to cross. The result is a deflation of what a human can know about “other minds” and thus about the world.
Now, it is surely coincidental that Nagel’s thinking about what it might be like to be another being, the problem of subjective experience being foreclosed to objective description and thus external knowledge, was published just a couple of years after Boris and Arkady Strugatsky’s iconic story “Roadside Picnic” (1972). There is much that one can write and say about this work, but for my purposes the salient point is that “Roadside Picnic” marks another moment in this deflationary stance toward human knowledge. What the Strugatskys manage is an imaginative overcoming of the anthropic bias.
“Roadside Picnic” is a parable about the less-considered circumstances of alien first contact. The primary conventions in this genre are the epic clash of civilizations (think Ender’s Game or Starship Troopers), the benign if sometimes misunderstood diplomatic mission (Close Encounters of the Third Kind, The Man Who Fell to Earth), or invasion by way of contagion (Alien, Invasion of the Body Snatchers). What each of these variants shares, though, is a high estimation of the place of humans in the story. In all of them, we are worthy of attention, as adversaries, allies, or as a means of survival.
In contrast, the Strugatskys’ “Roadside Picnic” begins with a bit more epistemic humility. The first entry in the brothers’ writing journal that covers the development of the story begins like this: “… A monkey and a tin can. Thirty years after the alien visit, the remains of the junk they left behind are at the center of quests and adventures, investigations and misfortunes.” And here is Ursula Le Guin reviewing the story in 1977:
Roadside Picnic is a “first contact” story with a difference. Aliens have visited the Earth and gone away again, leaving behind them several landing areas (now called the Zones) littered with their refuse. The picnickers have gone; the pack rats, wary but curious, approach the crumpled bits of cellophane, the glittering pull tabs from beer cans, and try to carry them home to their holes.
Here, humans are the monkeys, the pack rats, the ants.4 Chiang, talking about his own story, makes a similar case:
[“The Evolution of Human Science”] was written in response to an idea there was around 2000, when people were talking about the singularity and that we would transcend into something much greater. I was mostly thinking, well, why is everyone so certain they’re going to be the ones to transcend? Maybe transcendence isn’t going to be available to all of us, so what would it be like to live in a world where there are these incomprehensible things going on, and you’re sort of on the sidelines?
Where Chiang-the-critic is concerned with the advances of A.I. in the here and now and the impoverishments they portend for the culture to come, Chiang-the-writer might have subjected this critique to slight adjustment and explored “Why A.I. Isn’t Going to Make Art - For Us.” And why isn’t it? Because whatever “Art” might be for an A.I. superintelligence is not going to be legible to humans, just as the metahuman science in “Catching Crumbs From the Table”/”The Evolution of Human Science” isn’t legible to humans either, and instead becomes the stuff of hermeneutics.
The Hard Problem of Intention
That Chiang takes language seriously — that what humans are relegated to doing in “The Evolution of Human Science” is “hermeneutics” or something like it, and not some other thing — is worth our attention. Hermeneutics is, after all, what one does when studying a religious text as if it were a natural object and not the product of human hands. Hermeneutical exegesis of the Bible was not the same as literary or art criticism. Though both activities aim at meaning, where that meaning issues from is different, which also entails that the idea of “meaning” itself is different in each case. In hermeneutics, the Bible is taken to be the word of God (or at least a reasonable facsimile of it). It is the word of creation itself, much as we would consider “nature” today, or the “physical world.” We don’t ask what nature or the physical world mean; we try to figure out what they are and how they work. Hermeneutics, at least within the religious context, should be considered similarly: as a study not of what the divine means, but of what it is in itself, how it manifests in the world and, at the epistemic level, how we know it. Prior to the scientific revolutions of the seventeenth century, or at least the Enlightenment moment of the eighteenth, one would simply take it for granted that what there is in the world was given by the divine, written by God. Our task, like the task of humans in some future that comes after the advent of metahumans, is to understand it — nature, the world, God — on its own terms.
Now, the difficulty arrives when what is in that world is not just the mute stuff of mineral and organic matter, in various states of greater or lesser organization, regarded as the product of physical or even divine law. In Chiang’s story, what comes to take the place of human science is a hermeneutic of the products of metahuman activity. A useful analogy would be to the artifacts of some long-past and collapsed civilization for which we have no proven Rosetta Stone: things that appear to us as having been made, but whose purpose and use remain a mystery and so the subject of inquiry and theorizing — in other words, archaeology, or at least that subfield of it that approaches and probes the limits of its own claims to empiricism.
And what distinguishes archaeology from, say, geology, or climatology, what makes it a field in the humanities and not a field in the sciences (or, in Chiang’s world, a field of human science and not a field of metahuman science), is the reigning assumption that its objects of study were made by what Steven Knapp and Walter Benn Michaels long ago, in their polemic “Against Theory” (1982), called an “intentional agent.” Knapp and Michaels were at that time concerned with literary criticism and the idea that “theory” — understood by them as a project that either argued for the validity of certain kinds or methods of interpretation of literary texts or against the very possibility of interpretation5 — could underwrite and thus validate the concept of an “intentionless language.”
The story Knapp and Michaels tell goes like this: imagine you are walking on the beach and come across a “curious sequence of squiggles” in the sand that actually spell out the first stanza of Wordsworth’s poem “A Slumber Did My Spirit Seal.” As the authors state, “this would seem to be a good case of intentionless meaning,” because, in being able to read these “squiggles,” to recognize them as words, and even perhaps to understand them as a “rhymed poetic stanza,” the question of intention, of who is responsible for the squiggles, would seem irrelevant. But then, as you’re looking at these lines in the sand, a wave washes up on the shore and as it recedes it reveals a different sequence of squiggles, this time looking very much like the second stanza of Wordsworth’s poem. What at first appeared to be meaning without intention now appears to be the product of no agent at all. The appearance of these lines in the sand was merely accidental, and as such, “what you thought was poetry,” write Knapp and Michaels, “isn’t poetry because it isn’t language.” Not knowing who the author was didn’t mean the marks weren’t intended; but once the marks are seen as accidents, there is no author and thus no meaning. If there is no author, and no meaning, then the question of intention isn’t just irrelevant; it can’t even be asked.
Though this may seem to retread terrain that had been fought over years ago about the “death of the author” and so forth, it’s worth recalling where Knapp and Michaels take this line of argument in “Against Theory,” and it deserves quoting at length:
If our example has seemed farfetched, it is only because there is seldom occasion in our culture to wonder whether the sea is an intentional agent. But there are cases where the question of intentional agency might be an important and difficult one. Can computers speak? Arguments over this question reproduce exactly the terms of our example. Since computers are machines, the issue of whether they can speak seems to hinge on the possibility of intentionless language. But our example shows that there is no such thing as intentionless language; the only real issue is whether computers are capable of intentions. However this issue may be decided—and our example offers no help in deciding it—the decision will not rest on a theory of meaning but on a judgment as to whether computers can be intentional agents.
The question of whether A.I. will make art ultimately comes down to exactly this judgment: whether A.I. can be, or will be judged to be, an intentional agent. Chiang’s metahumans certainly must be, because it is assumed in “The Evolution of Human Science” that the products of metahuman science do things in the world, or to the world, that humans are left to interpret through their hermeneutics.
Here is the point one wants to press on Chiang: if metahumans are intentional agents, then what they do is meaningful, which means art is something they could make. It’s even quite possible that what humans mistake for metahuman science, and try to interpret as such, is really metahuman art. But for Chiang, “metahuman” is just another name for “A.I.” So why is the former capable of making art while the latter is not? That is a question for Chiang; it’s not a question his critics are interested in even asking.
But there’s another question worth asking, and it is this: If metahumans made art, would it appear to us as accident or as artifact, and between the two, would we even know where to draw the line?
1. I don’t know if this marks a full change of heart on Chiang’s part from how he treated the idea of metahuman superintelligence in his short story of 2000. In that story, metahumans aren’t actually introduced as machines but as something more like an evolutionary step beyond humans, more like a new species. But the distinction between superintelligent organic beings and superintelligent machines is irrelevant to the question of what kind of art or science such a superintelligence would produce and how different it would be from human art or science.
2. This is both the strength and the weakness of Nam June Paik’s early video sculptures: he was too fascinated with the box and not fascinated enough with the apparatus of video as a technology of mediation and immediacy.
3. Though on this account, there is great promise in ChatGPT’s tendency to “hallucinate.”
4. The popularity of Cixin Liu’s The Three-Body Problem is notable because it challenges the anthropic bias not only on the issue of intelligence but on temporality as well, a further deflation of our conventional frames of subjective experience.
5. Susan Sontag’s “Against Interpretation” (1964) was an early entry in the catalogue of arguments for this view. As she wrote: “The function of criticism should be to show how [the work of art] is what it is, even that it is what it is, rather than to show what it means.” The 1960s marked a moment when the idea of art, “what it is,” was newly up for debate, so making the case for something as art could be considered one purpose of criticism; but it would be a mistake, on Knapp and Michaels’s view, to confuse making the case for art with making the case for what it means.