Tuesday, April 23, 2013

Interpreting science at #EB2013 in Boston

While enjoying the excellent sessions sponsored by the American Society of Nutrition (ASN) at the Experimental Biology 2013 meetings here in Boston this week, I was struck once again by the way actual nutrition science research results are filtered or digested into short memes of conventional wisdom before they reach the public.

This filtering process is necessary, unavoidable, and even healthy. Yet it is also the step that brings politics and special interests into the production of nutrition policy and dietary guidance.

Here is a passage from my chapter on Dietary Guidance in Food Policy in the United States: An Introduction (Routledge/Earthscan).
Filtering is the process of reading a large body of research and concisely summarizing its relevant points. Because the scientific literature is so heterogeneous, its policy impact depends heavily on how the research is filtered.

Filtering may be biased toward certain types of conclusions. Food industry organizations hire scientists and public relations specialists to spread the good word about favorable studies, without mentioning unfavorable studies. The public relations specialists are evaluated according to their success in placing favorable stories in the mass media. Reporters do not purposely seek to serve as a vehicle for industry public relations, but they face intense pressure to generate buzz by reporting novel and surprising findings. Hence, even though the balance of evidence in the scientific literature changes only slowly, headlines each week tell the public that everything they previously believed about nutrition and health was a big fat lie.

To summarize a complex scientific literature with less bias, scientists prefer to rely on systematic evidence reviews. In a systematic evidence review, an interdisciplinary team establishes a protocol, a document that describes in advance the procedure for selecting relevant research studies, reducing the temptation to concentrate on studies that are favorable to the team’s prior expectations. For each selected study, the team evaluates the strength of the evidence, again using criteria established in advance.

Systematic evidence reviews do have some limitations. While they can avoid errors that stem from selective reading of just favorable parts of the scientific literature, systematic evidence reviews cannot fix misinterpretations that are widespread in the literature. Also, such reviews may not reflect recent improvements in scientific research. Still, because of their transparency and replicability, systematic reviews can clarify the state of the evidence on contentious scientific issues.

If you are attending the Experimental Biology 2013 meetings this week in Boston, the book itself is on display today at the CRC Press booth (#531 in the exhibition hall). Please stop by the booth, and please share your thoughts on whether food policy is a worthy topic of study at a meeting of scientists.