
The Value of Publishing All Research Findings

August 23, 2016

Recently, a study was published that recovered unpublished documents and raw data from a trial conducted between 1968 and 1973, almost 50 years ago. The results provided one more (big) piece of the dietary fat puzzle. Before I could put words to paper myself, the Chicago Tribune published an interesting editorial about the new findings, in which the editors wonder “why such a rigorous experiment was essentially buried from public view for so long.”

The study found that replacing saturated fat in the diet with a vegetable oil rich in linoleic acid (in this case, corn oil) lowered blood cholesterol, but, unexpectedly, this did not improve survival. Additionally, and contrary to what was hypothesized, participants with the greatest reductions in blood cholesterol had a higher, rather than lower, risk of death, an effect that appeared to be driven by the subgroup of participants 65 years and older. In light of these findings, recent studies reporting that saturated fat consumption is not associated with cardiovascular disease as once thought now make more sense, or at least reveal the complexity of the diet-heart relationship.

What struck me most about this paper wasn’t just the actual results, but what its publication, decades after the data were collected, says about the scientific process in general. The Chicago Tribune editorial suggests that reluctance to publish results defying the conventional wisdom of the time could stem from several factors, including potential harm to a scientist’s career from contradicting accepted hypotheses, and the possibility that the journals and editors who approve publication have a vested interest in those same hypotheses.

Had these results been published promptly, how might the trajectory of research over the subsequent decades, and by extension the way we viewed the science and implemented dietary recommendations, have been different? It is a question we will never fully be able to answer. When research findings go unpublished, it is not only the path of science that may suffer, but also the public’s understanding of, and perhaps trust in, nutrition science in general. Ultimately, the public bears the cost of misdirected food and nutrition advice built on incomplete research. If we expect the public to trust dietary recommendations, then those recommendations must be based on the totality of the evidence, not just the evidence that supports certain hypotheses.

Aside from the actual study results, if we can take one lesson from this study, it is the importance of publishing all studies, whether they find significant effects or associations that confirm the hypothesis, report null findings, or run counter to the prevailing information and conventional wisdom of the time. Researchers may hesitate to stake out ground that runs counter to consensus, particularly when reputation and funding opportunities are at stake; looking at the history of science, careers have sometimes been made or broken in this way. However, the courage to bring forth a counterargument and add to healthy scientific debate should be encouraged.

Moving forward, I hope the counterargument is more openly embraced. Rather than hindering opportunities for further research funding or publication, putting forth hypotheses that run counter to the existing collective thinking should enhance them. This will add to the scientific debate, not detract from it, and strengthen public trust in nutrition science. Only with healthy scientific debate will we truly be on a path to enhancing our knowledge.
