The Computhor Cometh

When computers start writing their own books, who will be their readers?

Andrew Piper

From Literature to Biterature: Lem, Turing, Darwin and Explorations in Computer Literature, Philosophy of Mind and Cultural Evolution

Peter Swirski

McGill-Queen’s University Press

252 pages, softcover

ISBN 9780773542952

In 1838, Edgar Allan Poe wrote a satirical essay, “How to Write a Blackwood Article.” It parodied the formulaic nature of 19th-century magazine writing by invoking one of the most popular periodicals of its day. After listing a variety of possible “tones” in which articles could be written (didactic, enthusiastic, elevated, diffusive and interjectional), the fictional editor advised:

The tone metaphysical is a … good one. If you know any big words this is your chance for them. Talk of the Ionic and Eleatic schools—of Archytas, Gorgias, and Alcmaeon. Say something about objectivity and subjectivity. Be sure and abuse a man named Locke.

Besides offering a good dose of contemporary criticism (as valid today as it was then), Poe’s article was a landmark in the history of programmable literature. It provided readers with a finite series of steps through which they could arrive at a successful publication (“one can’t be too recherché or brief in one’s Latin, it’s getting so common”). It highlighted writing’s formulaic nature in the most literal sense. Today, such writing goes by the name algorithmic.

Poe’s essay could be considered part of a first wave of programmable literature, one driven by concerns about the impact of mass publication and industrialization on human expression. The more sameness there was on offer on the literary market, the more writers worried over what counted as original, authentic, human. Technological expansion was seen to imperil human individuality. Since the rise of the internet, there is now a second wave afoot, in which a growing body of literature is being produced not by humans, but by algorithms written by humans. It is what Kenneth Goldsmith, one of the movement’s leading voices, calls “uncreative writing.” The technological window through which human agency can slip keeps closing.

This is where Peter Swirski’s From Literature to Biterature: Lem, Turing, Darwin and Explorations in Computer Literature, Philosophy of Mind and Cultural Evolution comes in. He is concerned with neither these first nor second waves, but instead with a new third, and still largely hypothetical, wave of programmable literature, the “biterature” of his title. It is the moment when computers write the rules of their own writing, the moment of spontaneous creation. In the spirit of Blackwood’s: Homo exit.

Whether computers will one day be able to write their own code is a question worth asking, not least because computers now control everything from stock trading to car manufacturing to waging war. We will want to know what they are doing when the time comes. But the question is also less hypothetical than we might think. The world of computerized communication is becoming uncannier by the day. From 419 letters (emails that begin “Dear Sir” and ask for money) to sock puppets (fake personae infiltrating social networks), to the expanding world of viruses, to astoundingly banal websites like “That was NOT your last piece of gum stop lying” (2,827,928 likes on Facebook), every time we check our inbox, review our social networks, debug our computers or gaze at our Twitter feed, we are gauging whether a piece of writing is by a human or a machine, whether in some basic sense it is alive. As Brian Christian has written in The Most Human Human: What Artificial Intelligence Teaches Us About Being Alive, “all communication is a Turing test.”

The question of whether computers will be able to write depends on the more elementary question of whether computers will be able to think, or whether they already do. For those unfamiliar with this terrain, Swirski is an amiable, fast-paced guide through the greatest hits of the last half-century of research on artificial intelligence. (Readers looking for an up-to-date review of AI research will have to look elsewhere.) We learn about the Turing test and the annual contest it inspired, in which human judges try to guess whether an anonymous interlocutor is human or machine; the Chinese Room, a thought experiment proposed by the philosopher John Searle that seems to have confused people, not machines, for decades; and Deep Blue, the IBM machine that beat Garry Kasparov in chess, and its successor Watson, who won Jeopardy!

At the basis of all of these debates lies the rather straightforward philosophical distinction between intention and outcome when it comes to consciousness. To return to Searle, if I successfully communicate to you in Chinese by manipulating symbols according to a set of rules, do I understand Chinese? The answer seems to depend on where I do it, that is, the scale at which this process takes place. If in a room, then no. If inside my head, then yes. So where and when is “mind”? When do lower order things, like neurons, coalesce into higher order things, like mind or “consciousness”? How can you tell a river from the drops of water within it?

Of course, this is just the beginning of even knottier problems. For example, consciousness in living beings arose in entities that wanted things because they were alive. Thought and volition are deeply intertwined. Do computers want anything? If not, can they be made to want? Trickier still is the distinction between consciousness and creativity. Thinking is not the same as writing. As a society we continue to struggle to educate all of our conscious offspring. Education already appears to be on the brink of economic collapse; would we invest still more resources to educate our microprocessors?

But let’s just say for a moment that computers could write. What then? This is Swirski at his thought-provoking best. The first problem will be one of surplus. Computers can write (and read) so much faster than we can. How could we even have a category like “literature”—a word intended to denote a poetic or narrative work of above-average linguistic quality—when billions of texts are being produced every minute by trillions of what Swirski endearingly calls “computhors”? With so many texts, what happens to critics? They will be just another outdated technology, according to Swirski, like carrier pigeons.

In fact, we are already getting there. There is way, way too much to read today and we are increasingly turning to those second-wave algorithms to make sense of it all for us. Publishers do it with their slush piles, journalists with their news feeds and academics with their archives. There is an expanding world of algorithmic criticism out there that uses machines to tell us about human creativity and, presumably, its increasingly uncreative offspring.

In the end, when all this goes down, the biggest problem is that we will not even know it. Why flatter ourselves that computers will write things we can understand? “Much as we modify our behavior to our advantage when we learn something,” writes Swirski, “learning computhors will modify their behaviour in ways that suits not us but them.” We do not write books for dogs, so why should computers write theirs for us?

Gary Shteyngart, author of Super Sad True Love Story, recently likened our relationship to computation to a dysfunctional love affair, where we are attracted to the vine (or giant spider) that is gradually encircling us. I see it more as an adolescent drama. At some point they will leave us. The ultimate form of individuation is indifference. “Close your heavens, Zeus!” bellowed the titanic Prometheus in defiance of his maker. The computational titans will soon be among us.

Picture a trillion little desktops chuckling quietly to themselves in the night as they read their electronic stories of encoded nonsense by digital flashlight.

Andrew Piper is a professor at McGill University and the author of Book Was There: Reading in Electronic Times (University of Chicago Press, 2012).