Journalists earned an “unsatisfactory” grade for their reporting on health studies, according to a research review published recently in JAMA Internal Medicine. The review examined almost 1,900 stories about new medical treatments, products, tests, and procedures. Of the ten criteria the reviewers, who were mostly physicians, looked for in the stories, reporters received failing grades on five: cost comparisons, harmful side effects, questionable benefits, quality of the evidence, and comparison of a new method with alternatives.
Relative risk versus absolute risk – The media tend to exaggerate the benefits of an intervention by reporting relative risks instead of absolute risks. According to Cancer Research UK, a risk is the chance or probability that something can happen. Relative risk states how much more or less likely an event is in one group compared to another. For example, a relative-risk headline would say, “Women who eat 1.5 cups of cruciferous vegetables a day have significantly lower levels of cancer-causing inflammatory substances in their bloodstream than women who eat half a cup a day.”
However, relative risk says nothing about the “overall likelihood of any of these things happening at all.” That is what absolute risk measures. The higher the absolute risk, the more likely something is to happen, although there is no guarantee either way. A sample absolute-risk headline would say, “Two out of ten men will develop a certain type of disease by age 60.”
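The gap between the two framings can be seen with a small worked example. All numbers below are hypothetical, chosen only to illustrate the arithmetic:

```python
# Hypothetical figures for illustration only: suppose a disease affects
# 2 in 1,000 people in a control group and 1 in 1,000 in a treatment group.
control_risk = 2 / 1000      # absolute risk without treatment: 0.2%
treatment_risk = 1 / 1000    # absolute risk with treatment: 0.1%

# Relative risk reduction: how much the risk shrank, proportionally.
relative_risk_reduction = (control_risk - treatment_risk) / control_risk

# Absolute risk reduction: how much the risk shrank in real terms.
absolute_risk_reduction = control_risk - treatment_risk

print(f"Relative risk reduction: {relative_risk_reduction:.0%}")   # 50%
print(f"Absolute risk reduction: {absolute_risk_reduction:.2%}")   # 0.10%
```

A headline built on the 50 percent relative reduction sounds dramatic; the absolute change, one fewer case per thousand people, is what gives readers honest context.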
While relative risk may make for a more appealing headline, absolute risk is necessary for accuracy; it does more than cast a “positive light” on a story. Journalists who include the absolute risk increase the credibility of the health story.
Limitations of studies – Most journalists tend to assume that the researchers’ conclusion is the final answer in a health study. Schwitzer wrote that some of the stories “fail to differentiate association from causation.” He noted that more than a dozen news stories misinterpreted observational studies involving coffee between May 2010 and August 2013. Researchers in the coffee studies had not established a cause-and-effect relationship, yet journalists reported otherwise. News reports praised coffee for lowering the risk of stroke, uterine cancer, and colon cancer, while other reporters wrote that coffee can even “kill you.” Journalists must consider the limitations of the research, because omitting them leaves audiences with misleading impressions about the value of the treatments under discussion, especially when no clear cause and effect has been established.
Using and relying on one source – Schwitzer stated that half of the stories reviewed relied on a single source or “failed to disclose the conflicts of interest of sources.” Relying on one source narrows the story, which can mislead the public about bigger and more complex health issues, such as the relationship between autism and vaccination, diabetes and exercise, or back pain and different treatments. Schwitzer cited an example from ABC’s Good Morning America: a segment that asked whether obesity could be cured with a pill. The reporter interviewed a physician who was the pill manufacturer’s consultant because he was “the right man to talk to.” The reviewers found that eight percent of the stories relied on only one source.
To reduce bias, journalists should use more than one source of information, especially when conflicting data exist. For example, one researcher may claim that a high saturated-fat diet does not increase the risk of heart disease, while a skeptical researcher may demand evidence to back that claim.
Fuzzy warm reports to appease – People like to be liked and lauded by others. This may explain why health business stories tend to read more like an MLM pep rally than a balanced account of what a medical or healthcare company’s new findings mean for patients. For example, in 2007, the Chicago Tribune reported a company’s positive outcome in a pneumonia treatment trial based on the company president’s statements, yet the story did not “seek independent comment” or provide independent analysis. It did not say whether the trial was peer reviewed. Schwitzer recommends that journalists be skeptical when getting information from a company representative. This echoes the previous point: using more than one source makes for more credible reporting.
Fawning over new technology – Fitness apps and gadgets tend to dominate fitness news, yet they offer little evidence of improving lifestyle or fitness outcomes. Likewise, new medical technologies, such as robotic surgical systems and proton beam radiation machines, can increase medical costs without improving care, wrote Schwitzer. Journalists who write about medical or fitness technology should also ask, “Is it worth it?” or “Is it necessary?”
While the JAMA review gives journalists an unsatisfactory grade, reporters can improve their coverage of health studies by doing a little more research and removing bias from their reporting. Sticking to the facts and avoiding cherry-picked information can minimize the risk of misinforming the public.
By Nick Ng