A mathematical framework built on the economic theory of hidden-action models shows how the unobservability of effort and risk shapes investigators’ research strategies and the incentive structures within which they work, according to a study published August 15 in the open-access journal PLOS Biology by Kevin Gross of North Carolina State University, U.S., and Carl Bergstrom of the University of Washington, U.S.
Scientific research requires taking risks, as the most cautious approaches are unlikely to lead to the most rapid progress. Yet much funded scientific research plays it safe, and funding agencies bemoan the difficulty of attracting high-risk, high-return research projects. Gross and Bergstrom adapted an economic contracting model to explore how the unobservability of risk and effort discourages risky research.
The model considers a hidden-action problem, in which the scientific community must reward discoveries in a way that encourages effort and risk-taking while simultaneously protecting researchers’ livelihoods against the unpredictability of scientific outcomes. The community’s challenge is that incentives to motivate effort clash with incentives to motivate risk-taking: a failed project may be evidence of a risky undertaking, but it could also be the result of simple sloth. As a result, the incentives needed to encourage effort actively discourage risk-taking.
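To make the structure of such a problem concrete, the following is a minimal sketch of a textbook hidden-action (principal-agent) formulation; the notation is generic and does not reproduce Gross and Bergstrom’s exact specification.

% Generic hidden-action (moral hazard) setup -- illustrative only.
% The community (principal) observes only the outcome x, never the
% researcher's effort e or risk choice r.
\begin{align*}
&\text{The researcher privately chooses effort } e \text{ and risk } r;
  \text{ only the outcome } x \sim F(x \mid e, r) \text{ is observed.}\\
&\text{Researcher's problem: } \max_{e,\,r}\;
  \mathbb{E}\big[\,u\big(w(x)\big) \mid e, r\,\big] - c(e)\\
&\text{Community's problem: choose the reward schedule } w(\cdot)
  \text{ to maximize expected scientific progress net of rewards,}\\
&\text{subject to the researcher's best response above and a participation
  constraint } \mathbb{E}\big[\,u\big(w(x)\big)\big] - c(e) \ge \bar{u}.
\end{align*}

Because the schedule w(·) can condition only on observed outcomes, any schedule steep enough to make high effort worthwhile also penalizes the failures that riskier projects produce more often; that tension is the one the study analyzes.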
Scientists respond by working on safe projects that generate evidence of effort but that don’t move science forward as rapidly as riskier projects would. A social planner who prizes scientific productivity above researchers’ well-being could remedy the problem by rewarding major discoveries richly enough to induce high-risk research, but in doing so would expose scientists to a degree of livelihood risk that ultimately leaves them worse off. Because the scientific community is approximately self-governing and constructs its own reward schedule, the incentives that researchers are willing to impose on themselves are inadequate to motivate the scientific risks that would best expedite scientific progress.
In deciding how to reward discoveries, the scientific community must contend with the fact that reward schemes that motivate effort inherently discourage scientific risk-taking, and vice versa. Because the community must motivate both effort and scientific risk-taking, and because effort is costly to investigators, the community inevitably establishes a tradition that encourages more conservative science than would be optimal for maximizing scientific progress, even when risky research is no more onerous than safer lines of inquiry.
The authors add, “Commentators regularly bemoan the dearth of high-risk, high-return research in science and suppose that this state of affairs is evidence of institutional or personal failings. We argue here that this is not the case; instead, scientists who don’t want to gamble with their careers will inevitably choose projects that are safer than scientific funders would prefer.”