Why Pundits Get Things Wrong
The best predictor was fame: the more feted by the media, the worse a pundit's accuracy.
Pointing out how often pundits' predictions are not only wrong but egregiously wrong—a 36,000 Dow! euphoric Iraqis welcoming American soldiers with flowers!—is like shooting fish in a barrel, except in this case the fish refuse to die. No matter how often they miss the mark, pundits just won't shut up, and I'll lay even odds that the pundits (and pollsters) who predicted a big defeat for Tzipi Livni in the Israeli elections last week didn't slink away in shame after her party outpolled all others. The fact that being chronically, 180-degrees wrong does not disqualify pundits is in large part the media's fault: cable news, talk radio and the blogosphere need all the punditry they can rustle up, track records be damned. But while we can't shut pundits up, we can identify those more likely to have an accurate crystal ball when it comes to forecasts ranging from the effect of the stimulus bill to the likelihood of civil unrest in China. Knowing who's likely to be right comes down to something psychologists call cognitive style, and with that in mind Philip Tetlock, a research psychologist at the University of California, Berkeley, would like to introduce you to foxes and hedgehogs.
At first, Tetlock's ongoing study of 82,361 predictions by 284 pundits (most but not all of them American) came up empty. He initially looked at whether accuracy was related to having a Ph.D., being an economist or political scientist rather than a blowhard journalist, having policy experience or access to classified information, or being a realist or neocon, liberal or conservative. The answers were no on all counts. The best predictor, in a backward sort of way, was fame: the more feted by the media, the worse a pundit's accuracy. And therein lay Tetlock's first clue. The media's preferred pundits are forceful, confident and decisive, not tentative and balanced. They are, in short, hedgehogs, not foxes.
That bestiary comes from the political philosopher Isaiah Berlin, who in 1953 argued that hedgehogs "know one big thing." They apply that one thing (for instance, that ethnicity and language are primal; ergo, any country that contains many ethnic groups will break up) everywhere, express supreme confidence in their forecasts, dismiss opposing views and are drawn to top-down arguments deduced from that Big Idea. Foxes, in contrast, "know many things," as Berlin put it. They consider competing views, make bottom-up inductive arguments from an array of facts and doubt the power of Big Ideas. "The hedgehog-fox dimension did what none of the other traits did," says Tetlock, who described the study in his 2005 book "Expert Political Judgment": "distinguish more accurate forecasters from less accurate ones" in both politics (will Iraq break up?) and economics (whither unemployment?).
In short, what experts think matters far less than how they think, or their cognitive style. At one extreme, hedgehogs seek certainty and closure, dismiss information that undercuts their preconceptions and embrace evidence that reinforces them, in what is called "belief defense and bolstering." At the other extreme, foxes are cognitively flexible, modest and open to self-criticism. White House economics czar Larry Summers is seldom accused of having a modest personality, but he displays the fox's cognitive style: in briefing the president, he assigns numerical probabilities to possible outcomes of economic policies, rather than saying, "This will [or will not] happen." Similarly, Yale economist Robert Shiller, who forecast the bursting of both the tech bubble in 2000 and the housing bubble in 2006, deploys a flexible cognitive style that works from the data up and not from one Big Idea down. Here's how to identify fauna: foxes pepper their speech and writing with "however" and "but," recognizing uncertainty in the face of competing forces. Hedgehogs suffer from no such doubts, which (combined with their adherence to a Big Idea) makes them especially prone to overpredict change: the House of Saud will fall, the European Monetary Union will collapse, Canada will disintegrate like Yugoslavia—in the last case, from the primal force of ethnicity. Leftist hedgehogs, applying the Big Idea that those who oppose dictators are virtuous, failed to foresee the fierce repressiveness of Iran's 1979 revolution, which overthrew the shah; applying the Big Idea that involvement in regional war = quagmire, they predicted that the first Gulf war would last 20 years and claim 50,000 American lives. Oops.
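Summers's habit of attaching numerical probabilities to outcomes, rather than flat "this will happen" calls, can be made concrete with a probability score of the kind Tetlock's book describes. Here is a toy sketch—the outcomes and numbers are invented for illustration, not drawn from Tetlock's data—using the Brier score, a standard accuracy measure for probabilistic forecasts:

```python
# Toy illustration with invented numbers: scoring probabilistic
# forecasts with the Brier score (lower is better; 0.0 is perfect).

def brier_score(forecast_probs, outcome_index):
    """Mean squared error between forecast probabilities and reality.

    forecast_probs: probabilities over mutually exclusive outcomes.
    outcome_index: index of the outcome that actually occurred.
    """
    actual = [1.0 if i == outcome_index else 0.0
              for i in range(len(forecast_probs))]
    return sum((p - a) ** 2
               for p, a in zip(forecast_probs, actual)) / len(actual)

# Three possible outcomes of a policy: works, mixed result, fails.
hedgehog = [0.95, 0.04, 0.01]   # "This will happen," full stop
fox      = [0.45, 0.35, 0.20]   # hedged, "however"-laden probabilities

# Suppose the mixed result occurs (index 1): the confident miss
# is punished far more heavily than the hedged one.
print(f"hedgehog: {brier_score(hedgehog, 1):.3f}")
print(f"fox:      {brier_score(fox, 1):.3f}")
```

Note that the score does not simply reward timidity: a confident call that comes true scores nearly perfectly. It rewards boldness only when boldness is matched by accuracy, which is exactly why it separates cognitive styles instead of bluster.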
The media, of course, eat this up. Bold, decisive assertions make better sound bites; bombast, swagger and certainty make for better TV. As a result, the marketplace of ideas does not punish poor punditry. Few of us even remember who got what wrong. We are instead impressed by credentials, affiliation, fame and even looks—traits that have no bearing on a pundit's accuracy.
The truly bad news for forecasters, however, is that although foxes beat hedgehogs, math often beats all but the best foxes. If there are three possibilities (say, that China will experience more, less or the same amount of civil unrest), throwing darts at targets representing each one produces a forecast more accurate than most pundits'. Simply extrapolating from recent data on, say, economic output does even better. But booking statistical models on talk shows probably wouldn't help their ratings.
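That closing comparison can be sketched in code. This is a toy simulation under invented assumptions—a short synthetic run of outcomes in which "no change" dominates, and a hedgehog pundit who always predicts change—not Tetlock's actual data, but it reproduces the ordering the paragraph describes:

```python
# Toy sketch (synthetic outcomes, invented numbers): over three
# possibilities, uniform "dart-throwing" probabilities beat a confident
# pundit, and simple extrapolation from past data beats the darts.
# Accuracy is measured by the Brier score: lower is better.

def brier(probs, outcome):
    """Mean squared error between forecast probabilities and what happened."""
    return sum((p - (1.0 if i == outcome else 0.0)) ** 2
               for i, p in enumerate(probs)) / len(probs)

# Outcome each period: 0 = more unrest, 1 = same, 2 = less.
outcomes = [1, 1, 0, 1, 2, 1, 1, 0, 1, 1]
n = len(outcomes)

# 1. Dart thrower: uniform probabilities, every period.
darts = sum(brier([1/3, 1/3, 1/3], o) for o in outcomes) / n

# 2. Hedgehog pundit: the Big Idea says unrest will rise, every period.
pundit = sum(brier([0.8, 0.1, 0.1], o) for o in outcomes) / n

# 3. Extrapolation: forecast from the running frequency of past outcomes,
#    with add-one smoothing so early forecasts stay near uniform.
def frequency_forecast(history, k=3):
    counts = [1] * k
    for o in history:
        counts[o] += 1
    total = sum(counts)
    return [c / total for c in counts]

extrapolation = sum(brier(frequency_forecast(outcomes[:t]), o)
                    for t, o in enumerate(outcomes)) / n

print(f"pundit:        {pundit:.3f}")
print(f"darts:         {darts:.3f}")
print(f"extrapolation: {extrapolation:.3f}")
```

On this invented series the pundit scores worst, the darts land in the middle, and the frequency-based extrapolation wins—the same ranking Tetlock reports for real forecasts.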