
How the wisdom of the crowd works - and why it can fail

[PICTURE]

A few weeks ago, we published a blog post about prediction markets and the forecasting power they hold. These markets draw some of their predictive power from a phenomenon called the wisdom of the crowd: the tendency for aggregated estimates or forecasts generated by large numbers of people (typically the median or mean of their responses) to come closer to the truth than any individual response. While the wisdom of the crowd can be startlingly accurate at times, it can fail miserably at others. Today we're going to take a look at why, through the lens of a new study that we conducted.

The wisdom of the crowd works by canceling out the individual biases of the participating crowd members. If a large group of people is guessing how many jelly beans are in a large jar, for instance, it's very likely that some proportion of them will overestimate the total and some proportion will underestimate it. However, these two groups balance each other out when you look at the mean or median guess of the whole crowd. This process can produce surprisingly accurate results for questions that individuals would stand little chance of nailing. In one famous 1906 incident, a large group of English fairgoers guessed the weight of a dressed ox; their median response came within 1% of the animal's actual weight, even though most of the individual guesses fell far from the truth. This bias-canceling process is an important part of why prediction markets work, and it is also common to crowdsourced information resources like Wikipedia and Quora.
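
To make that bias-canceling concrete, here's a minimal Python sketch (the jar size, number of guessers, and noise level are invented for illustration, not data from any real crowd). When individual errors are noisy but roughly symmetric around the truth, both the median and the mean land close to the real count:

```python
import random
import statistics

random.seed(0)

TRUE_COUNT = 850    # hypothetical number of jelly beans in the jar
N_GUESSERS = 1000

# Each guess is individually noisy, but the errors are roughly symmetric:
# some people overestimate, others underestimate.
guesses = [TRUE_COUNT + random.gauss(0, 200) for _ in range(N_GUESSERS)]

print(f"median guess: {statistics.median(guesses):.0f}")  # lands close to 850
print(f"mean guess:   {statistics.mean(guesses):.0f}")    # also close to 850
```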

However, when crowds are faced with less intuitive questions, they can fail as dramatically as any individual. ClearerThinking founder Spencer Greenberg recently conducted a brief study that illustrates how and why crowds can turn "dumb."

Participants in the study were asked to answer the following question without the aid of a calculator:

"Sally invested $60 in the stock market 30 years ago. It has increased by about 10% per year (annualized return). How many dollars is it today?"

If you'd like, take a moment to come up with your own guess before you read on.

...

This question is essentially about exponential growth, which many people find difficult to calculate mentally (and for good reason!). As a result, their intuitions tend to be less reliable and less evenly balanced when they estimate exponential growth, and the crowd's wisdom whiffed on the truth for this question, as you can see here:

[PICTURE]

The vast majority of participants underestimated how much Sally's money would grow. As a result, their median answer was way too low: just $360, when the true answer was $1,047. (Sally's $60 would grow to around $360 in about 19 years, not 30.) And while a small number of individuals were wise enough to suspect that the true answer would be considerably higher than that, their instincts led them to dramatically overshoot the mark. A handful of them guessed so high that they fell outside the distribution in the chart entirely, pulling the group mean (as opposed to the median) up to $1,900.
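
The arithmetic behind those figures is ordinary compound growth, which you can verify in a few lines of Python:

```python
import math

# Compound growth: value after t years = principal * (1 + rate) ** t
principal, rate = 60, 0.10

print(round(principal * (1 + rate) ** 30))                    # ~1047: the true answer
print(round(math.log(360 / principal) / math.log(1 + rate)))  # ~19: years for $60 to reach $360
```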

As in the ox and jelly bean cases, individuals in this experiment split largely into two camps: people whose biases led them to guess too low, and people whose biases led them to guess too high. The difference is that this time, the two groups didn't balance each other. Far too many people guessed low, pulling the median well below the target. Meanwhile, the small number of people who guessed too high were so wildly wrong that they dragged the group's average guess away from the right answer too.
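
Here's a rough Python simulation of that kind of lopsided crowd (the proportions and guess ranges below are invented for illustration, not the study's data). When most errors fall on the low side of the truth and the rest overshoot wildly, the median undershoots and the mean overshoots, so neither aggregate lands near the right answer:

```python
import random
import statistics

random.seed(0)

TRUE_VALUE = 1047   # the correct answer to the investment question
N_GUESSERS = 1000

guesses = []
for _ in range(N_GUESSERS):
    if random.random() < 0.85:
        # Most people underestimate exponential growth and cluster well below the truth.
        guesses.append(random.uniform(100, 600))
    else:
        # A small minority overshoots wildly, sometimes by an order of magnitude or more.
        guesses.append(random.uniform(1500, 20000))

print(f"median guess: {statistics.median(guesses):.0f}")  # far below 1047
print(f"mean guess:   {statistics.mean(guesses):.0f}")    # dragged well above 1047
```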

You can see an illustration of this effect in the chart below. The x-axis shows the rank of each individual guess (the smallest is furthest left, followed by the second smallest, and so on), while the y-axis shows the size of each individual guess ($200, for example) on a logarithmic scale. The handful of guessers who fall between the two red horizontal lines came closest to the correct answer. You can also see that the people who guessed too low missed in a different pattern from those who guessed too high, reflected in the change in slope of the regression line above and below the red horizontal lines:

[PICTURE]

The bottom line is that the wisdom of crowds is only valuable when balanced biases cause people's answers to cluster around the true answer. The top bullseye in the graph below illustrates this kind of situation: even though the individual answers mostly miss, they're clustered around the right answer, which means the aggregate answer comes close to the truth. In situations where biases don't cancel out, you end up with a "dumb" crowd, as in the bottom bullseye: people's answers are systematically biased away from the truth. In this situation, the crowd's aggregate answer only beats about 50% of the individual answers, so you might as well just ask a random individual and hope they're smarter than average.

[PICTURE]

In the real world, this kind of imbalance can have serious consequences. It's part of why financial markets can suddenly crash, for instance: in certain situations, cognitive biases shared by many people can undo the typically reliable self-correcting mechanisms inherent in market dynamics. This kind of surprising failure is both a good reason to take wisdom-of-the-crowd estimates with a grain of salt on more complex questions, and a reminder of why learning to identify and challenge your own biases is so important.
