Americans Say Mainstream Media’s Coverage of Science Is Less Than Stellar
By Victoria Sheridan | Oct 11, 2017
A September Pew Research Center poll has reinvigorated the years-old debate over how mainstream news media outlets cover science.
Pew found that although roughly half of Americans believe that “specialty sources” like science magazines, documentaries, and science museums present scientific information accurately, few actually rely on them for science news. Instead, most turn to general news outlets, even though they believe those outlets often fail to present scientific developments accurately.
Most people—54 percent—regularly get their information on hot-button policy issues like vaccination, stem cell research, GMOs, and climate change from newspapers, magazines, and other general news outlets that cover a wide variety of science issues and stories. But even though only 28 percent of those surveyed said these sources “get the facts right” most of the time, just 25 percent turn to science magazines, and only 12 percent rely on science museums for information.
Americans’ views of this situation are mixed. When asked whether the news media did a “good job” or a “bad job” of covering science, 57 percent said a good job, while 41 percent said a bad job. Pew also asked respondents whether problems with science coverage rested with “the way reporters cover it” or “the way researchers publish it.” More than 70 percent of respondents identified reporters as the bigger problem, while only 24 percent faulted researchers. (Pew conducted the poll in late May and mid-June, surveying about 4,000 adults.)
Scientists and researchers have also complained about lackluster mainstream news stories. In 2015, Harvard University PhD student and researcher Samuel Mehr spoke out about how newspapers and magazines oversimplify scientific findings, in response to coverage of a study he had co-authored.
Mehr and his co-authors focused on how music classes could affect the cognitive abilities of children. In the study, one group of children received six weeks of music lessons while other groups of children participated in art projects or had no organized activities. When all the children took a cognitive test at the end of the six weeks, the children in the music classes performed no better than the other groups of children.
But Mehr acknowledged that there were a number of caveats that could explain why the music classes did not seem to have an effect on the children’s cognitive test scores. Time magazine’s report on the study, however, concluded that “music may not make you smarter.” Mehr pointed out that the actual study did not measure intelligence, and that the reporter had confused correlation with causation.
Nearly a decade ago, science writer Cristine Russell offered suggestions for journalists covering climate change. Russell explained that when new findings about climate change surface, “the subtleties of the science, and its uncertainty, might be missed by reporters unfamiliar with the territory.” Because the results of one study can contradict those of other studies, Russell said, scientists look for consistent patterns across multiple studies before coming to a conclusion. “Journalists should avoid ‘yo-yo’ coverage with each new study and try to put the latest findings in context,” she said.
Today, the problem that Russell identified continues to seep into mainstream news reports. According to Pew, this “yo-yo” coverage was one of the bigger issues Americans identified in science news: 43 percent of Americans believed that the media are “too quick to report findings that may not hold up,” and another 30 percent said the news media oversimplify research findings.
In March, Alex Berezow of the American Council on Science and Health, along with editors from RealClearScience, ranked mainstream news outlets’ science coverage using two criteria: how much of the coverage was based on evidence, and how compelling the science stories were. The highest-ranked sources were niche publications like the weekly journals Nature and Science. However, a few general news outlets, including The Atlantic, Vox, and The Guardian, also ranked highly. (Two outlets that Berezow writes for, The Economist and the BBC, did well in the rankings as well.)
However, NPR, The Washington Post, MSNBC, and CNN fared relatively poorly. Berezow was particularly critical of The New York Times, which he said frequently gives a platform to “fringe” doctors, “pseudoscientific claims,” and discredited studies.
Though Berezow’s rankings are based on only two criteria and cover just a fraction of news sources, his findings highlight some of the mistakes journalists frequently make. The warning signs include: failing to explain a study’s methodology, or leaning on unexplained technical terms (which suggests the writer did not understand the original report); omitting any “limitations on the conclusions of research” and drawing sweeping conclusions from a single study; and trying to connect the results of a single study to larger, unrelated issues.
Berezow concluded that inaccurate reporting not only mars the reputation of an individual reporter and his or her outlet, but also damages the credibility of journalists everywhere.