Hybrid Intelligence

Do computers and humans actually think better together?

The smartest thing anyone has said about our relationship with technology is this: “Technology is neither good nor bad; nor is it neutral.” That deceptively simple dictum is number one in a list of laws composed in the mid-1980s by history professor Melvin Kranzberg. I believe in Kranzberg’s law because so often we swing between extremes when thinking about the internet and its cronies: there will always be the Nicholas Carrs of the world, reminding us in books like The Shallows: What the Internet Is Doing to Our Brains that our very patterns of thought are being tampered with each time we bathe our brains in an online experience; and there will always be fervent optimists, too, hallooing from their Silicon Valley offices about the glories of the coming singularity, after which point obliging machinery will bond with our brains and deliver us from the drudgery of our biological selves. Kranzberg’s law states that things are always more complicated, that there is nothing good or bad in the machines we create, but our approach to them makes it so. A tweet is both benign and dangerous at once.

Clive Thompson’s new book, Smarter Than You Think: How Technology Is Changing Our Minds for the Better, is decidedly optimistic—perhaps in reaction to the many books about technology that tend toward the dystopian—but I was happily surprised to find that Thompson is anything but a cheerleader. Like Kranzberg, he believes that no technology determines our fate; it is our actions and our will that do. Moralizing about a technology’s inherent “goodness” or “evil” is a waste of time.

That said, Thompson takes it as his mission in Smarter to offset the grumbling Luddites among us. Buffeted as we are by extreme change, it has become easy and tempting to assume that YouTube is making us all into distracted fools and Facebook is reducing our storytelling to base (and debased) elements. To counter these fears, Thompson introduces the “centaur” theory, in which a sort of hybrid mind—half human, half computer—can accomplish more than either can on its own.

He describes a pivotal centaur moment early on. When chess grand master Garry Kasparov was defeated by IBM’s Deep Blue supercomputer in 1997, Kasparov had the “audacious” idea that, if he could not beat them, he might as well join them:

What would happen if, instead of competing against one another, humans and computers collaborated? What if they played on teams together—one computer and a human facing off against another human and a computer? That way, he theorized, each might benefit from the other’s peculiar powers. The computer would bring the lightning-fast—if uncreative—ability to analyze zillions of moves, while the human would bring intuition and insight.

What happens when such centaurs are born? It turns out that the best chess-playing hybrids do not necessarily include chess masters as their human components. Rather, selecting a human who is an expert at “driving” the computer will produce optimal results. But what does driving the computer mean, exactly, when a driver does not need to know the road that is being driven—does not need to be an expert, say, at chess? While Thompson offers examples throughout his well-researched book of humans “collaborating” with computers to achieve brave new results, I often found myself wondering whether the value of that result had been altered, too. For example, I may be able to lift a great deal more with a forklift than with my own muscles, but that does not mean that purchasing a forklift makes me a stronger man.

Think of the story Seneca tells about a rich man called Calvisius Sabinus, who had a crummy memory but would purchase slaves and make them memorize books, effectively creating a human Google system that he could search at will. Sabinus thought he had built himself a superior memory because he owned the slaves and therefore their “functionality” was his. An arch friend undid this fallacy by suggesting that, given how many strong slaves he had, frail Sabinus ought to try his hand at wrestling.

Thompson is no foolish Sabinus. But I did sometimes feel he was conflating a result (winning a chess game) with an accomplishment (being a great chess player). When discussing the enormous boon to memory that computers provide, for example, he writes that “pulling up ‘Tori Amos’ on Google seems different from looking up a bit of trivia in an encyclopedia. It’s less like consulting a book than like asking someone a question, consulting a supersmart friend who lurks within our phones.” But that way lies a certain danger. Whether we feel intimately connected to digital recall systems or to those transactive memory systems we employ within social circles, we are always pretending to have gained a few IQ points we never really earned. Put another way, we think we know everything on a certain terrain when, in fact, we have simply become adept at remembering where all the signposts are standing—we know whom (or what) to ask.

Perhaps complaining about “authentic” smarts versus a facility with Google is to miss the point, though. What I like best about Thompson’s book is the way it encourages readers not to become complacent with the ease of technology-aided solutions but, rather, to use those technological aids to push ourselves into unknown territory. Just as calculators enable us to tackle far more complicated mathematics problems by relieving our minds of arithmetical drudgeries, so our search engines, our data mining, our enormous networks of communication and monitoring should crack open whole new fields of inquiry.

The greatest challenge, though, that Thompson takes on in this book is convincing quasi-Luddites like me that social media like Twitter actually lead to finer understandings of our fellow humans. What qualities, we wonder, are there to gain from such enormous quantity? “Each day,” Thompson notes, “we compose 154 billion e-mails, more than 500 million tweets on Twitter, and over 1 million blog posts and 1.3 million blog comments on WordPress alone.” We are, in some ways, a more voraciously literate culture than any in history. Before the internet, as Thompson notes, most people never had the means to publish any of their ideas whatsoever. Most, after a few dodgy essays in high school, became passive consumers of other people’s texts for the remainder of their lives. But today we are all authors (and, more worryingly for the elitists among us, we are all critics, too).

What does such a plethora of text, from such a hodgepodge of writers, actually give us? On the individual level, argues Thompson, it clarifies the mind. E.M. Forster asked the question, “How can I tell what I think till I see what I say?” and similarly, Thompson argues that an age of constant publication gives everyone the chance to parse out their ideas in the light of public scrutiny. Writing about his blogger friends, Thompson says, “pretty soon they think about the fact that someone’s going to read this as soon as it’s posted. And suddenly all the weak points in their argument, their clichés and lazy, autofill thinking, become painfully obvious.” I hope, rather than believe, that this is the case in the so-called “comment fields” where public debates now rage.

Thompson finds that constant broadcasting produces another mental boon: what social scientists have dubbed “ambient awareness.” The perpetual ability to check in on our social network (to know that Susan enjoyed her ham sandwich 20 minutes ago, or that Kenny is running late for a screening of Disney’s Frozen) can “allow us to socially ‘groom’ one another, in the way that primates groom one another physically.” This textual grooming—our idle text message smiley faces and easy likes on Facebook walls—may be banal, but it is not so different from the banality of everyday conversation. Whether there is a meaningful difference between private banality and broadcasted banality, however, is another question. Nevertheless, Thompson suggests that “when critics freak out about the triviality of online mobs, they are probably not listening carefully to their own daily talk.”

Thompson offers a smart historical comparison to defuse those critics: in a 1673 pamphlet about the rise of coffee houses (which were the 17th-century equivalent of online forums), “the author savages the social hangout for being ‘an exchange, where haberdashers of political small-wares meet, and mutually abuse each other, and the publick, with bottomless stories, and headless notions’.” The coffee house, when it first arrived, was seen as “the rendezvous of idle pamphlets, and persons more idly employed.”

To be sure, each new wave of communication technology—from the invention of writing, which threw Socrates into a tizzy, to the invention of telegraphs and telephones and televisions—has left critics scrambling to describe the downfall of a “finer way of being” now usurped by noisier, more crowded realities. But I think those worriers should not be altogether discounted. New technologies are not just fun new “add-ons”; they actually do destroy old ways of communicating, and thus old ways of knowing ourselves. As McLuhan told us back in the 1960s: “A new medium is never an addition to an old one, nor does it leave the old one in peace. It never ceases to oppress the older media until it finds new shapes and positions for them.” But Thompson’s rejoinder, that this is the way of the world and not the end of it, comes as a needed salve for the fretting into which these arguments can otherwise devolve.

In historical comparison, there is often great solace. We see that we are not the first to be stricken with new media, nor with its attendant anxieties. In seeing that the advent of the internet is not without precedent, the question changes from the shrill “what have we done?” to the far more interesting “what shall we do with this?” And it is that kind of levelheadedness that Thompson’s book is especially good at delivering. Along the way, we learn about past inventions such as the Dewey decimal system and the Mundaneum of 1910, both of which sought to organize the world’s flood of content. We learn that our struggle is an eternal and human one, not something unique to the harried present.

Thompson believes that our evolving relationship with technologies will actually make us smarter. He believes that the centaurs we are becoming—part human, part computer—with offloaded “memory” and an “ambient awareness” of our social network via media like Twitter and Facebook will ultimately bring us fruitful new ways of thinking about the world and new ways of seeing ourselves. That may be—to a point. But sharing and chatting and googling can only take you so far. For myself, I am with Edward Gibbon, who said, “conversation enriches the understanding, but solitude is the school of genius.” Despite the undeniable utility of our technological aids, we do need to save some private workshop of the mind in which to do our greatest, our most meaningful work. In the end I was not sure how far Thompson would agree with that point.

In his final chapter he notes that online technologies can be used by a despotic leader to oppress a nation just as easily as they can be used to promote an “Arab Spring.” But Thompson believes—and I applaud the position—that we are fumbling toward a brighter future, in our way. By marshalling his many years of experience as a technology critic, and by balancing his optimism with regular counter-arguments, Thompson builds a unique contribution to the larger conversation we are all having about online life. For anyone who has wondered where the independent optimists are in our debates about technology, Smarter Than You Think comes as a welcome arrival. Yes, I wanted to argue with him more than once—but that is a sign that an author has done his job admirably.