Discussion about this post

Evan Harper

As someone in the approximate "superforecaster" range, I notice a big gap between my subjective experience of forecasting and what the generalizability literature seems to say. The literature puts a heavy emphasis on cognitive and personality factors. But when I try to improve my own forecasting performance, achieving a Zen-like state of contemplation has nothing to do with it. I am gathering information and trying to better understand the specific domain and the specific question I'm forecasting on. Domain knowledge seems like by far the highest-leverage factor. This leads to a puzzle, because formal domain expertise doesn't seem to predict forecasting success at all.

I can think of several ways to resolve the puzzle:

* Domain experts don't really have domain expertise. "Experts" means mostly academics, whose real expertise is in abstruse theoretical frameworks with limited applicability to the real world. Their domain-level expertise is acquired informally "on the job" and is in principle readily available to anyone; they are frequently outdone on this by randos.

* Domain experts have legitimate domain expertise but they're sufficiently good about disseminating it that a non-expert can quickly gain most of the predictive benefit of expertise by skimming the expert literature and looking at secondary sources who have themselves done so. Basically, domain experts are searching depth-first so as to mine PhDs and citations, whereas non-experts trying to make practical forecasts need a breadth-first view at which PhD experts don't greatly outperform them.

* Domain experts have legitimate domain expertise that is *not* readily accessible to nonexperts such as forecasters, but few domain experts *also* have the cognitive and personality traits to be expert forecasters, without which their expertise is useless for forecasting purposes and does not show up in the data. Domain experts also face poor professional incentives for accurate forecasts, such as returns to overconfidence, safety-in-herding behavior, etc. In principle, expert forecasters who also became legitimate domain experts would do even better at forecasting.

* Domain experts have legitimate domain expertise, *and* this expertise is not easily sharable with nonexperts such as generalist forecasters, *and* many domain experts make legitimately good expert forecasters, *but* those people are so valuable that they take their abilities private and never develop the formal credentials to show up in research as "domain experts" to the true extent of their expertise. Neither credentialed experts nor "smart generalists" actually outperform these people; they're just not *the same* people who show up in Tetlock-style surveys of "experts".

In practice it probably has to be most or all of these factors working in combination, yeah?

A bird

Need for cognition is probably the skill for thinking
