Science Abstracts Do Not Always Tell Whole Story


Movie trailers capture viewers’ attention by giving a general idea of what a film is about without giving away the entire plot. They do not, however, tell the whole story, and they may leave some film-goers with a false impression. The same can be said of the abstracts of scientific research papers, which summarize the “plot” of a study but do not always tell the whole story accurately.

A science abstract summarizes a study, giving clinicians and other readers a general idea of what it is about and helping them decide whether to invest the time in reading the entire paper. Just as a work of fiction presents its characters, setting, plot, climax, and resolution, an abstract provides an introduction to the study, the research methods, the results of the experiment, and the discussion and conclusion, which describe how the researchers interpret those results.

Like some movie trailers or Amazon book reviews, science abstracts do not always tell the story the way the actual paper presents it. A French review of 144 rheumatology studies, published in Joint Bone Spine in May 2012, found that almost 25 percent of them had misleading conclusions, especially those with negative findings. Problems included failure to report primary outcomes and conflicts between the study results and the researchers’ conclusions.

Such errors in science abstracts are not new. One of the earliest critical reviews of abstracts, published in the Journal of the American Medical Association (JAMA) in 1999, examined articles randomly selected from five major medical journals, including JAMA, the British Medical Journal, and the New England Journal of Medicine, published between July 1996 and August 1997. The authors found that between 18 and 68 percent of the selected abstracts were inconsistent with the data, results, and conclusions of their papers, which was quite a surprise at the time, since the abstracts appeared in large-circulation medical journals.

Two later studies, one published in 2004 on pharmacology and one published in 2012 on spinal manipulative therapy for low back pain, also found inconsistencies and bias in collections of randomized controlled trials. In the pharmacology study, almost 25 percent of 243 selected abstracts contained “omissions,” while one-third of all the abstracts contained either an omission or an inaccuracy; overall, more than 60 percent were classified as “deficient.” In the manual therapy review, only about 28 percent of the abstracts reviewed had a low risk of bias.

Of course, not all science abstracts are inaccurate, and knowing what each section of a paper should contain helps readers judge its quality. In the introduction, the authors should describe what is known and unknown about the topic, how the study bridges that gap, and what their hypothesis is. The methods section describes the sample population, its size, and how the experiment is set up. Higher-quality studies tend to have larger sample sizes that better represent the target population. For example, a study of 1,000 middle-aged women with type 2 diabetes would represent that population more accurately than a study of 10 or 100 such subjects.
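The intuition behind sample size can be made concrete with a standard statistical rule of thumb (not taken from the studies discussed here): the margin of error of an estimated proportion shrinks with the square root of the sample size. A minimal Python sketch, assuming a simple random sample and a worst-case proportion of 0.5:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a proportion p
    estimated from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# Larger samples narrow the margin of error: roughly +/-31
# percentage points at n=10, +/-10 at n=100, +/-3 at n=1000.
for n in (10, 100, 1000):
    print(n, round(margin_of_error(n), 3))
```

Going from 10 to 1,000 subjects cuts the uncertainty roughly tenfold, which is why the larger study better represents the general population.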

The results section reports and analyzes what the researchers found, while the discussion and conclusion are where the authors interpret their findings, question them critically, acknowledge the study’s limitations, and state whether the results support the hypothesis. Authors should not cherry-pick from the results to support a position, which would make the study heavily biased.

Since science abstracts do not always tell the whole story of a research article, readers should remain cautious about an abstract’s claims until they have read the entire paper. Even systematic reviews, usually regarded as one of the highest levels of evidence with a lower risk of bias than a single study or case report, can draw contradictory or biased conclusions. If the quality of the underlying evidence and data is garbage, then the results will most likely be garbage, too. Thus, it is better to “watch the full movie” than to rely on a “movie trailer” to judge the quality and accuracy of the information.

By Nick Ng

Sources:

Body In Mind
JAMA
The Annals of Pharmacotherapy
Joint Bone Spine
International Journal of Osteopathic Medicine
