Ordinary people, sufficiently motivated, are capable of finding insights that match or exceed career experts.
It’s no secret that experts are often wrong. Anyone can make a bad call now and then, but in theory an “expert” should make far fewer mistakes than the rest of us. In practice, that’s true less often than we might expect.
Social scientists surveyed before and after the Covid-19 pandemic performed no better than non-experts when asked to make predictions about its effects on society.1
Look at the so-called “replication crisis” in science. One estimate is that perhaps half of all U.S. preclinical biomedical research – $28 billion worth – is wasted because, once published, nobody else is able to replicate the results.2
Some apparent incompetence may simply be a case of overconfidence. This so-called Dunning-Kruger effect is probably over-publicized by experts eager to laud their superior thinking skills, but common sense tells us that many people exaggerate their own abilities. But are they really exaggerating, or are they simply showing their determination to solve a difficult problem?
Overconfidence has three different meanings3:
Overprecision: the belief that a prediction is more accurate than it is. This is the error found in tests that ask you to “guess the length of the Nile in miles”.
Overplacement: the belief that we perform better than others.
Overestimation: the belief that we can perform at a higher level than what is objectively warranted.
But this is precisely the area where a positive attitude can actually improve your performance.
Note that people who exhibit overplacement (e.g. the 90% of drivers who think they’re above average) aren’t necessarily being haughty. Ask them to rank their drawing (or singing) ability, for example, and you’ll often find they rank themselves below average.
For things we can’t control (e.g. the weather, the S&P 500), there’s nothing to be gained from overestimation. But when we can influence events, optimism and a bias for action are good things. Think about what would happen if a beleaguered military commander gave his misfit troops an honest assessment of how they were likely to perform on the battlefield.
Nullius in verba (“Take nobody’s word for it”) is the motto of the Royal Society, established in 1660. But how often is this the watchword of today’s practicing scientists? The standard response in most discussions of expertise is to point to “the peer-reviewed literature”, as if that were the final answer.
The distinguishing feature of an expert – the attribute that makes him (or her) better than the rest of us – is experience. An expert has simply seen more cases than the rest of us.
A coffee expert, for example, has presumably tasted or otherwise worked with far more varieties than the rest of us. It might be possible – though extremely difficult – to become a coffee expert without ever having tasted the drink. Somebody with health or religious objections to coffee-drinking might, for example, find himself owning a business that requires he learn much more about the varietals than the average person. But he will never be an expert without repeated exposure to something. In fact, you could say that “expertise” is another word for distilled experience.
Of course, experience alone is not the answer. An expert must apply his observations to new, ever-varying situations. Exposure to an unchanging sequence of items doesn’t stretch the abilities of the would-be expert. True mastery of a skill requires what is sometimes called deliberate practice: sustained, focused attention across a variety of situations and conditions.
Although experience is the main attribute that separates the average person from an expert, the motivated personal scientist has one big advantage: the subject matter is personal. Nobody will ever be as much of an expert on you as you are.
Hutcherson, C., Sharpinskyi, C., Varnum, M. E. W., Rotella, A. M., Wormley, A., Tay, L., & Grossmann, I. (2021). The pandemic fallacy: Inaccuracy of social scientists' and lay judgments about COVID-19’s societal consequences in America [Preprint]. PsyArXiv. https://doi.org/10.31234/osf.io/g8f9s ↩︎
Freedman, L. P., Cockburn, I. M., & Simcoe, T. S. (2015). The Economics of Reproducibility in Preclinical Research. PLOS Biology, 13(6), e1002165. https://doi.org/10.1371/journal.pbio.1002165 ↩︎
See Moore, D. A., & Healy, P. J. (2008). The trouble with overconfidence. Psychological Review, 115(2), 502–517. https://doi.org/10.1037/0033-295X.115.2.502 ↩︎