The Market for Wisdom
A review of Oracles: How Prediction Markets Turn Employees into Visionaries, by Donald N. Thompson
In The Big Short: Inside the Doomsday Machine, Michael Lewis celebrated the brave few who, prior to the crash of 2008, refused to swallow the Kool-Aid being ladled up from Wall Street’s sterling silver punchbowl. “The best way to make money on Wall Street,” one of these mavericks dared to think, “was to seek out whatever it was that Wall Street believed was least likely to happen, and bet on its happening.”
Of course, in 2008, some mavericks turned out to be right. But as influential as The Big Short has been in shaping popular perceptions of the crash, it is misleading.
Not false. But misleading. There are always mavericks convinced that their judgement is superior to that of the market, but very few are profiled by Michael Lewis, because most of them learn the hard way that they are wrong: putting your money on the markets is not a sure thing, but it is quite a safe bet. As is made clear in Oracles: How Prediction Markets Turn Employees into Visionaries, a new book by Donald N. Thompson, economist and emeritus professor of marketing at York University’s Schulich School of Business, that is not because the expensively dressed chaps on Wall Street are as clever as they think they are. It is something much more elementary—something illustrated by a famous story involving the British scientist Francis Galton.
One day in 1906, Galton was strolling about a country fair when he came across an unfortunate ox that was soon to be slaughtered and dressed. Passersby were invited to guess the weight of the beast. The person who came closest to the mark would win the carcass, a prize that was, presumably, more desirable in 1906.
Eight hundred people gave it a go. The ox met its untimely end and the correct answer was revealed: 1,198 pounds. The winner was a man who guessed 1,170 pounds. But it occurred to Galton to ask the contest organizers for the tickets on which people had written their estimates. He took them home, did some simple math and discovered that the average guess—the collective judgement of everyone who took part—was 1,197 pounds. The crowd was bang on.
Or at least that is how this story was framed in James Surowiecki’s 2004 bestseller The Wisdom of Crowds, an excellent book with an unfortunate title that has become the even more unfortunate shorthand for describing the phenomenon of collective insight. “Crowd” is not the most apt word. It suggests lots of people, often similar to one another, in the same physical space, talking together, sometimes noisily. That sort of crowd is subject to groupthink, information cascades and other psychologically rooted dynamics that can make it anything but wise.
But the people guessing the doomed ox’s weight in 1906 were not that sort of crowd. They were a diverse lot. They did not sit around a table to discuss the issue. They had no leader declaring a certain choice to be the right one, no boss hinting they should get with the program, no charismatic individual unduly influencing their thoughts. They simply judged, individually, as best they could. And when all their judgements were aggregated and distilled into a single judgement, they were, collectively, bang on.
It almost sounds magical, but there was no hocus-pocus involved. As happens so often, valid information was widely dispersed. Maybe one person had some experience with oxen. Another knew a little about butchering animals. And so on. Aggregating their judgements effectively aggregated the valid information they possessed. Errors were also widely dispersed, of course—some people may not have known which end of the ox eats grass—but these skew judgements in different directions and so they tend to cancel each other out.
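The error-cancellation at work here is easy to demonstrate. In this sketch, 800 guessers each see the ox's true weight plus an independent, idiosyncratic error; every number except the 1,198-pound weight is invented for illustration, and the error spread of 75 pounds is an arbitrary assumption, not something reported by Galton.

```python
import random

random.seed(42)  # fixed seed so the demonstration is repeatable

TRUE_WEIGHT = 1198  # pounds: the ox's actual dressed weight in Galton's story

# Each guesser's estimate is the truth plus an independent error.
# The 75-pound spread is an assumed figure, chosen only for illustration.
guesses = [TRUE_WEIGHT + random.gauss(0, 75) for _ in range(800)]

# The typical individual is off by dozens of pounds...
avg_individual_error = sum(abs(g - TRUE_WEIGHT) for g in guesses) / len(guesses)

# ...but the errors point in different directions, so the average cancels them.
crowd_estimate = sum(guesses) / len(guesses)
crowd_error = abs(crowd_estimate - TRUE_WEIGHT)

print(f"typical individual error: {avg_individual_error:.0f} lb")
print(f"error of the crowd's average: {crowd_error:.0f} lb")
```

Because independent errors shrink roughly with the square root of the number of guessers, the crowd's average here lands within a few pounds of the truth while the typical individual misses by dozens—exactly the pattern Galton found on the fairground tickets.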
This phenomenon has been demonstrated in a dazzlingly wide array of fields. The average of many polls is very likely to be more accurate than any one poll, for example. Groups routinely beat movie critics at picking Oscar winners. And in a 1968 incident made famous by Surowiecki, the United States Navy asked a diverse group to estimate the location of a submarine lost and presumed wrecked. None came close to the mark. But the collective estimate was only 183 metres from where the submarine was ultimately found.
Of course there is much more going on in a modern market—equity, commodity or currency—but the basic dynamic of information aggregation is the same. Markets are far from perfect, but they are remarkably good at processing information and drawing accurate conclusions. This invites an obvious question: can we construct markets in other fields and use them to create better-informed, more-accurate judgements?
In Oracles, Donald Thompson answers with an emphatic yes.
One of the first of what are now usually called prediction markets was created in 1988, when a small group of investors was asked to buy contracts on the outcome of the presidential election, which meant the value of the contracts would reflect the market’s collective judgement about the candidates’ chances. The 800 people involved not only called the election but were, collectively, more accurate than the national polling conducted by major firms. In the years since, this experiment has grown into the flourishing “Iowa Electronic Markets,” which spawned a host of competitors. Corporations such as Google have also internalized the concept, using prediction markets and their own employees’ distributed knowledge to test sales forecasts, shipping dates, new product ideas and competitors’ responses.
As one reads Oracles, it quickly becomes obvious that, with a little imagination, this is an idea that could have countless applications in many different fields. And yet few corporations have embraced it even to the extent that Google has, and companies such as Rite-Solutions, a high-tech defence contractor that has made internal prediction markets a central part of its operations, remain rare exceptions. Worse, prediction markets have made almost no headway in the public sector even though governments routinely ask questions—Will we meet our greenhouse gas emission targets?—that are perfectly suited to analysis by internal or external markets. There just is not the enthusiasm the evidence would seem to warrant.
Thompson is enthusiastic, however, probably too much so. Aside from being blandly written and occasionally disorganized, Oracles suffers from the author’s decided lack of skepticism. He gushes about a chart showing the dazzling accuracy of Google’s prediction market, for example, without mentioning that there is a significant failure: where the market said there was a 100 percent chance of something happening, it only happened 80 percent of the time, a level of overconfidence just as bad as researchers routinely find among ordinary people making individual judgements. That does not nullify the broader point. But it is an important caveat.
Thompson is at his best in discussing why prediction markets have not spread more widely. A big part of the problem, he concludes, is that prediction markets threaten to expose nonsense and tell leaders what they do not want to hear. For a flexible, creative organization like Google, that is acceptable. But most politicians and far too many corporate executives lead calcified organizations that exist primarily to serve themselves. Accuracy and innovation are not their priorities. Preserving the status quo is.
There is also the problem of hubris. “We freely acknowledge that we are not the two smartest people in the company,” one of the co-founders of Rite-Solutions told Thompson. That humility can be found wherever prediction markets have been tried. Executives must acknowledge that they do not have all the answers, that their judgement is not necessarily the best available, that people far beneath them may have valuable insights to contribute.
Unfortunately, that humility is all too rare in boardrooms, which tend to be occupied by people who believe they are—or at least pretend to be—the rare visionary who can beat the market.