This article is a chapter from my newest book, Fitness Science Explained, which is live now in our store.
If you want a crash course in reading, understanding, and applying scientific research to optimize your health, fitness, and lifestyle, this book is for you.
Also, to celebrate this joyous occasion, I’m giving away $1,500 in Legion gift cards! Click here to learn how to win.
***
“The saddest aspect of life right now is that science gathers knowledge faster than society gathers wisdom.”
—ISAAC ASIMOV
One month, media headlines blazon that research has confirmed that one food or another reduces your risk of cancer, diabetes, obesity, or some other nasty health condition.
Hooray, you think, time to load up on said food!
Then, some time later, after it has become a staple in your meal plans, the other shoe drops: new research refutes earlier findings and demonstrates that it actually increases your risk of disease and dysfunction.
What the heck?
How can scientific research just turn on a dime like that and do a full 180?
Oh well, you think, a few months of eating this way can’t have been that harmful. Life goes on.
Then it happens again. And again. And again. Eventually, you conclude that science can’t seem to make up its mind on anything and you stop paying attention.
Fortunately, this isn’t true.
It may appear that there’s a study to “prove” or “disprove” just about any assertion, but this illusion isn’t the fault of science itself. Rather, it stems from widespread misunderstanding of the scientific process, media sensationalism, and sometimes even fraudulent research.
Let’s take a closer look at the nine main reasons that science can appear to be so confusing and contradictory.
1. Media Misrepresentation
Attention spans are shorter than ever these days, and when news organizations have just a few hundred words or seconds to report on health matters, they can’t afford to discuss the nuances of complicated scientific research.
Instead, they need titillating headlines and easily digested soundbites that draw eyeballs and clicks, and bounce around in social media and water cooler conversations. That inevitably leads to misinformation.
The two most common ways this occurs are:
- Confusing correlation with causation.
- Oversimplification and sensationalism.
Let’s go over each.
Confusing Correlation with Causation
Quite a bit of health-related research is based on observational data, meaning that scientists observe groups of people going about their lives, collect various types of data, and then look for correlations between different variables. (A correlation is a mutual relationship or connection between two or more things.)
For example, it was through observational research that the link between smoking and lung cancer was first discovered.
In the famous British Doctors Study, whose first results were published in 1954, scientists sent questionnaires to British doctors asking about their smoking habits. The scientists then tracked which doctors developed lung cancer and found that those who reported smoking were more likely to get the disease.
This type of research is a fantastic tool for documenting phenomena, forming hypotheses, and pointing the way for further research. However, it can never conclusively determine the cause of the phenomena observed, because there are many ways for variables to be related without one causally influencing the other.
For instance, ice cream intake goes up in the summer, as does the incidence of drowning. So, you could say that there’s a strong correlation between eating ice cream and drowning.
This does not mean that eating ice cream causes people to drown, however, which is how your average media outlet might explain it.
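To make that concrete, here’s a minimal sketch in Python (with made-up numbers) of how a hidden third variable, summer heat, can produce a strong correlation between ice cream sales and drownings even though neither causes the other:

```python
# Minimal sketch: a hidden confounder (summer heat) drives both ice cream
# sales and drownings, producing a strong correlation without any causation.
# All numbers are invented for illustration.
import numpy as np

rng = np.random.default_rng(seed=42)

# Daily average temperature over a year (the confounder)
days = 365
temperature = 20 + 10 * np.sin(np.linspace(0, 2 * np.pi, days)) + rng.normal(0, 2, days)

# Both outcomes depend on temperature, not on each other
ice_cream_sales = 50 + 5.0 * temperature + rng.normal(0, 20, days)
drownings = 0.5 + 0.1 * temperature + rng.normal(0, 0.5, days)

# Yet the two outcomes are strongly correlated anyway
r = np.corrcoef(ice_cream_sales, drownings)[0, 1]
print(f"Correlation between ice cream sales and drownings: r = {r:.2f}")
```

In this toy model the only causal driver is temperature, yet the two outcomes track each other closely, which is exactly the trap the diet soda example below falls into.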
A good example of this is how the media has reported that drinking diet soda can make you fat. Cause and effect, cut-and-dried.
These warnings were based on research showing that people who drank diet soda more often also tended to be more overweight. That’s a correlation, though, not proof that one causes the other.
What if diet soda isn’t causing weight gain, but instead, obese people tend to switch to diet soda in hopes of losing weight?
That is just one of a number of alternative hypotheses that could explain the correlation, and that’s why further, more rigorous research is needed to identify the true cause.
Well, in this case, that additional research has already been done, and scientists found that the correlation between obesity and diet soda consumption was, in fact, largely explained by overweight people’s efforts to lose weight.
In other words, diet soda was more popular among overweight people trying to lose weight because it contains fewer calories than regular soda. Furthermore, when it’s used in this fashion (to reduce overall calorie intake), diet soda consumption is associated with weight loss, not gain.
Unfortunately, the media makes this type of mistake all the time.
Studies show that news outlets tend to report on observational research more than randomized controlled trials, which can establish causation (and which you’ll learn more about soon), and on lower-quality studies that should be taken with a grain of salt.
Oversimplification and Sensationalism
The media will often oversimplify or distort the results of a study to make a nice, catchy, “clickbait” headline. Tim Caulfield of the University of Alberta has coined a term for this: scienceploitation.
For example, a popular UK website once ran the headline, “A glass of red wine is the equivalent to an hour at the gym, says new study,” with a sub-headline of “New research reveals skipping the gym in favor of the pub is ok.”
Perfect, many people thought, time to exercise less and drink more!
If you actually read the scientific paper, though, you’ll quickly realize that isn’t what the study found.
Instead, it found that a compound in grapes and red wine called resveratrol may increase exercise capacity (how well people or animals tolerate intense exercise) in rats that were already exercising. There was no red wine involved in the study, and it never showed that people should stop working out.
Another example of this is when the media reported on a study from the New England Journal of Medicine with headlines claiming that drinking coffee could make you live longer.
However, not only did the media confuse correlation and causation, but they also failed to mention that the study only involved people who were at least 50 years old, had no history of cancer, heart disease, or stroke, and didn’t smoke. The study had many other limitations as well, which the scientists acknowledged in the paper but the media failed to report.
Why Can the Media Get Away With This?
There are likely three reasons why this type of reporting continues unabated:
- Journalists often have little formal training in science, and thus are unable to ensure their stories are scientifically accurate.
- The general public also has little formal training in science, and is thus unable to differentiate good from bad reporting.
- Sensationalism sells, so there’s always an incentive for the media to spin scientific research in sensational ways.
Keep in mind that most of these organizations rely on advertising revenue to survive, and advertising revenue is driven by website visits. Thus, from a business perspective, writing articles that get a lot of clicks is far more important than being scientifically accurate (especially when it would reduce the virality of the content).
2. Cherry Picking Versus Going by the Weight of the Evidence
It’s very common to have dozens or even hundreds of published studies on any given topic, and in many cases, the results aren’t all in agreement.
Sometimes the differing or even contradictory results come from differences in how the studies were designed and executed, sometimes shenanigans are afoot, and sometimes it’s just random chance.
This is why scientists consider the weight of the evidence available as opposed to the findings of a single study.
Think of a scale, with one group of studies more or less in agreement on one side, and another group that indicates otherwise on the other side. The scale will favor whichever side has more evidence to support its assertion, which you could say is where the “weight of the evidence lies.”
Thus, a good scientist will say, “Given the weight of the evidence, this explanation is most likely true.”
Unfortunately, due mainly to ignorance, personal biases, and the media’s love of controversy, research is often “cherry picked” to make claims that go against the weight of the evidence.
In other words, people often pick out and play up studies that they don’t realize are flawed, that they just personally agree with, or that will make for great headlines.
A perfect example of “cherry picking” occurs among some of the more zealous advocates of low-carb dieting.
They often hold up a few studies as definitive proof that low-carb diets are better for losing fat, and claim there’s no room left for discussion or debate.
When you peruse these studies, though, you’ll find glaring flaws in how they were carried out, and when you collect and analyze all of the available research on the matter, you’ll find there’s no practical difference in fat loss between low- and high-carb diets so long as calorie and protein intake are matched.
In other words, so long as people are eating the same amount of calories and protein, the amount of carbohydrate they’re eating won’t meaningfully impact their fat loss. In the final analysis, dietary adherence, not carb intake, is the biggest predictor of weight loss success.
Thus, a scientist worth their salt would say, the weight of the evidence indicates that there are no differences in fat loss between low- and high-carb diets so long as calories are restricted and protein intake is adequate. Accordingly, individuals should choose the diet that they can best stick to for maximum results.
(Yup, the old adage is true: In many ways, the best weight loss diet is the one you can stick to. And if you’d like specific advice about which diet you should follow to reach your fitness goals, take the Legion Diet Quiz.)
3. Different Quality Levels of Studies
As I mentioned above, there are often a large number of studies published on a particular topic, and some are better than others.
There are many factors to consider when assessing the quality of a study, ranging from the type of research (observational or experimental) to how well it was designed, how many participants it included, whether it involved humans or animals, and more.
Thus, when you’re working to determine the weight of the evidence, you have to consider not only the number of studies on each side, but the quality as well.
For example, if I have ten studies with only ten subjects each that point to one conclusion, and two studies with 1,000 subjects each that point to another, then the weight of the evidence lies with the latter, even though the former conclusion has more individual studies on its side.
(As you’ll learn later in Fitness Science Explained, sample size, which is the number of samples measured or observations used in a study, is a major determinant of the quality of research.)
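As a rough illustration of that arithmetic, here’s a hedged Python sketch that pools the hypothetical studies above by weighting each result by its sample size (a crude stand-in for the inverse-variance weighting used in formal meta-analyses); the effect sizes are invented purely for illustration:

```python
# Toy illustration of "weight of the evidence": pool study estimates weighted
# by sample size (a crude stand-in for formal meta-analytic weighting).
# Effect sizes and sample sizes are invented for illustration only.

small_studies = [(+0.40, 10)] * 10   # ten studies, 10 subjects each, effect ~ +0.40
large_studies = [(-0.05, 1000)] * 2  # two studies, 1,000 subjects each, effect ~ -0.05

def pooled_effect(studies):
    """Sample-size-weighted average of (effect, n) pairs."""
    total_n = sum(n for _, n in studies)
    return sum(effect * n for effect, n in studies) / total_n

all_studies = small_studies + large_studies
naive_average = sum(effect for effect, _ in all_studies) / len(all_studies)

print(f"Simple vote-count average across 12 studies: {naive_average:+.3f}")
print(f"Sample-size-weighted pooled estimate:        {pooled_effect(all_studies):+.3f}")
# The pooled estimate lands near the two large studies' result, even though
# the ten small studies outnumber them five to one.
```

Counting studies like votes points one way; weighting them by how much data they actually contain points the other.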
A perfect example of how ignoring the quality of research can result in misleading conclusions is antioxidant supplementation.
There’s low-quality evidence in the form of observational research and small-scale trials on animals and humans that suggests antioxidant supplementation may reduce the risk of cancer, and high-quality research in the form of randomized clinical trials that shows antioxidant supplementation doesn’t.
Guess which research the media and mainstream health “gurus” decided to champion? Yep, the low-quality research, and antioxidant supplements started flying off the shelves.
4. Science Moves Slowly
Contradictions are a natural part of the scientific process.
Many conclusions in science are tentative because they’re based on the best evidence available at the time.
However, as time moves on, and as scientists accumulate more data and evidence, newer findings and understandings can overturn older ones. This is particularly true when there’s little data and evidence to begin with.
A good example of this process is the story of butter versus margarine.
Three decades ago, as evidence accumulated that the saturated fat in butter may be related to heart disease risk, scientists recommended that people switch to margarine to reduce their saturated fat intake.
However, evidence then began to accumulate that the chemically modified fats (trans fats) in margarine were even worse than saturated fat in regard to heart disease risk.
Based on this newer evidence, scientists revised their recommendations to continue to limit butter, but also eliminate margarine and trans fats from diets.
5. Science Often Deals in Shades of Grey Rather Than Black and White
Science is full of nuance, so research usually doesn’t lend itself well to headlines and soundbites. That’s a problem, because what most people want are simple, neat, black-or-white answers to their questions.
Unfortunately, though, many scientific topics operate in shades of grey, especially when the evidence isn’t strong. There’s often a lot of uncertainty in the realm of science, which the general public finds uncomfortable.
They don’t want “informed guesses,” they want certainties that make their lives easier, and science is often unequipped to meet these demands. Moreover, the human body is fantastically complex, and some scientific answers can never be provided in black-or-white terms.
All this is why the media tends to oversimplify scientific research when presenting it to the public. In their eyes, they’re just “giving people what they want” as opposed to offering more accurate but complex information that very few people will read or understand.
A perfect example of this is how people want definitive answers as to which foods are “good” and “bad.”
Scientifically speaking, there are no “good” and “bad” foods; rather, food quality exists on a continuum, meaning that some foods are better than others when it comes to general health and well-being.
Take sugar, a molecule that most people consider “bad.”
In and of itself, it’s not a harmful substance, and one of its components, glucose, is necessary for life. Research shows that when it’s consumed in moderation as part of a calorie-controlled diet, it doesn’t cause adverse health effects or fat gain.
However, when sugar is added to highly processed foods to enhance their palatability and energy density, these foods become easier to overeat, and the resulting increase in calorie consumption and fat gain can become a health issue.
That doesn’t make for a good tweet or “elevator pitch” to a book publisher, though, and so the research on sugar tends to be separated into two buckets: one that shows it’s “good” and another that shows it’s “bad.”
This creates the illusion of incongruity, when in fact, it’s just a case of missing the forest for the trees.
6. Lack of Reproducibility/Replication
A very important concept in the realm of science is replication, or reproducibility.
For a scientific finding to be considered true, it needs to be reproduced, meaning that other scientists should be able to achieve the same results by repeating the experiment. This is important, because if other scientists can’t replicate the results, then it’s likely the initial results were a fluke.
The media loves to report on “hot” new studies with new findings, but often such studies are small “pilot” experiments that have yet to be reproduced with larger sample sizes and better study designs.
Often, later studies end up refuting the results of the original “breakthrough” research, giving the appearance of conflicting evidence. In reality, the initial results were happenstance. This is why it’s important to be cautious when viewing small studies with new or unusual findings.
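To see why, here’s a hedged Python sketch (parameters invented for illustration) that simulates a pile of tiny “pilot studies” of a treatment with zero true effect; by chance alone, a few of them still come out “statistically significant,” and unreplicated results like those are the ones that tend to make headlines:

```python
# Toy simulation: many small "pilot studies" of a treatment with no real effect.
# Roughly 5% will still cross the conventional p < 0.05 threshold by chance,
# which is why unreplicated "breakthroughs" deserve skepticism.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=0)

n_studies = 200      # number of independent pilot studies
n_per_group = 10     # tiny sample size per group
false_positives = 0

for _ in range(n_studies):
    treatment = rng.normal(0, 1, n_per_group)  # same distribution as control: no true effect
    control = rng.normal(0, 1, n_per_group)
    _, p_value = stats.ttest_ind(treatment, control)
    if p_value < 0.05:
        false_positives += 1

print(f"'Significant' findings from a treatment that does nothing: "
      f"{false_positives}/{n_studies} ({false_positives / n_studies:.0%})")
# Replication with larger samples is what weeds these flukes out.
```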
One of the greatest examples of this happened in the 1980s.
Two scientists held a press conference, saying they’d been able to make atoms fuse at room temperature (cold fusion). However, they hadn’t reproduced their results, and other scientists weren’t able to reproduce the results either. By late 1989, most scientists considered the prospect of cold fusion dead.
It’s also important to take note of the labs conducting research.
If one lab consistently produces a certain result, but other labs can’t reproduce it, then the research coming from the former lab should be viewed with skepticism.
For example, one lab has produced astounding muscle-building results with the supplement HMB, but other labs haven’t been able to reproduce anything close, which calls the positive results into question.
7. Poor Research Design/Execution
Sometimes a study produces unusual results simply because it’s poorly designed and executed.
A perfect example of this is research out of the Ramazzini Institute in Italy that supposedly showed that aspartame caused cancer in rats.
The research was heavily criticized by leading scientific organizations for having many flaws, including the fact that the control rodents had unusually high cancer rates, and the fact that when independent scientists asked to double check the data, the Institute flat out refused.
In most cases, organizations like this are outed among the scientific community, but by that time the story has already made its way through the media cycle, convincing many that once again, the scientific process doesn’t make any sense.
8. Unpublished Research
When scientists do a study, they collect the data, analyze it, write up the results, and submit the write-up to a scientific journal for publication.
The study then goes through a process of peer review, which consists of other independent scientists reviewing it for flaws. Based on their findings, the study is either accepted for publication or rejected.
The peer-review process isn’t without flaws, but it’s the first line of defense against bad research getting published and then propagated by the media. Thanks to peer review, if a study is published in a scientific journal, you can at least know it’s gone through some type of quality control.
This isn’t the case with unpublished research.
For example, scientists often present new research at conferences that has yet to be peer-reviewed or published. Sometimes the media catches wind of these reports and runs with them before they’ve gone through the peer-review process, and sometimes scientists will themselves promote the findings of studies that haven’t been peer-reviewed or published.
One case of this occurred on September 30, 2011, when Martin Lindstrom reported on his unpublished iPhone neuroimaging study in the New York Times.
He reported that people experienced the same feelings of love in response to their iPhones ringing as they did in the company of their partners, best friends, or parents. Many scientists criticized Lindstrom, stating that his data didn’t support such a conclusion. But, since Lindstrom had bypassed peer review, his dubious conclusions were all that most people ever heard or saw.
Companies that sell products often report unpublished research as authoritative proof of their effectiveness.
You should be wary of such research, because it hasn’t been scrutinized by independent scientists, and is often designed and executed in such a way as to guarantee positive results.
For example, the creator of a cold exposure vest claimed that his product could help people burn up to 500 extra calories per day. This startling promise was based on research he conducted himself where people wore the vest for 2 weeks and lost fat.
This trial was never peer-reviewed or published in any scientific journal, and if it had been submitted for review, it would have been rejected for egregious design flaws.
For instance, the alleged increase in energy expenditure was based on unreliable estimates of body composition rather than direct, validated measurements of energy expenditure.
9. Fabricated Research
Fabricated research essentially means research that’s been made up.
While fabricated research isn’t nearly as common as everything else we’ve covered so far, it still exists, and can lead to great confusion.
Scientists may falsify data for a number of reasons, including to gain money, notoriety, and funding for further research, or merely to add another publication to their name.
One of the most famous cases of fabricated research came from Andrew Wakefield. In 1998, he published a paper in the prestigious journal The Lancet that claimed to show an association between the measles, mumps, and rubella (MMR) vaccine and autism in children.
However, it was later discovered that he had fabricated some of his data; independent researchers discovered that Wakefield’s descriptions of the children’s medical cases differed from their actual medical records.
Wakefield’s paper was eventually retracted by the journal, but to this day, his fraudulent research is still used to support the claim that vaccines may cause autism, despite numerous studies showing no such relationship.
***
Scientific research can seem like a quagmire of misinformation, contradiction, and outright lies.
When you look under the hood, though, you quickly find that the media selectively picks the studies most likely to generate controversy, spins the findings for maximum dramatic effect, and withholds information about how the studies were conducted.
In other cases, the shenanigans start before the studies hit your Facebook feed.
Poor study designs skew results, and some scientists mishandle their data by accident or falsify it outright.
Despite all of that, science is still the best system we have for answering this simple question: What’s probably true, and what isn’t?
To understand how honest, intelligent researchers go about answering that question, we need to take a closer look at the scientific method.
***
Scientific References
- Wakefield, A. J., Murch, S. H., Anthony, A., Linnell, J., Casson, D. M., Malik, M., Berelowitz, M., Dhillon, A. P., Thomson, M. A., Harvey, P., Valentine, A., Davies, S. E., & Walker-Smith, J. A. (1998). Retracted: Ileal-lymphoid-nodular hyperplasia, non-specific colitis, and pervasive developmental disorder in children. Lancet, 351(9103), 637–641. https://doi.org/10.1016/S0140-6736(97)11096-0
- Soffritti, M., Belpoggi, F., Degli Esposti, D., & Lambertini, L. (2005). Aspartame induces lymphomas and leukaemias in rats. European Journal of Oncology, 10(2). https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.482.6743&rep=rep1&type=pdf
- Durkalec-Michalski, K., & Jeszka, J. (2016). The effect of β-hydroxy-β-methylbutyrate on aerobic capacity and body composition in trained athletes. Journal of Strength and Conditioning Research, 30(9), 2617–2626. https://doi.org/10.1519/JSC.0000000000001361
- Lowery, R. P., Joy, J. M., Rathmacher, J. A., Baier, S. M., Fuller, J. C., Shelley, M. C., Jäger, R., Purpura, M., Wilson, S. M. C., & Wilson, J. M. (2016). Interaction of beta-hydroxy-beta-methylbutyrate free acid and adenosine triphosphate on muscle mass, strength, and power in resistance trained individuals. Journal of Strength and Conditioning Research, 30(7), 1843–1854. https://doi.org/10.1519/JSC.0000000000000482
- Rogers, P. J., & Brunstrom, J. M. (2016). Appetite and energy balancing. In Physiology and Behavior (Vol. 164, Issue Pt B, pp. 465–471). Elsevier Inc. https://doi.org/10.1016/j.physbeh.2016.03.038
- Morenga, L. Te, Mallard, S., & Mann, J. (2013). Dietary sugars and body weight: Systematic review and meta-analyses of randomised controlled trials and cohort studies. BMJ (Online), 345(7891). https://doi.org/10.1136/bmj.e7492
- Freedman, N. D., Park, Y., Abnet, C. C., Hollenbeck, A. R., & Sinha, R. (2012). Association of Coffee Drinking with Total and Cause-Specific Mortality. New England Journal of Medicine, 366(20), 1891–1904. https://doi.org/10.1056/NEJMoa1112010
- Dolinsky, V. W., Jones, K. E., Sidhu, R. S., Haykowsky, M., Czubryt, M. P., Gordon, T., & Dyck, J. R. B. (2012). Improvements in skeletal muscle strength and cardiac function induced by resveratrol during exercise training contribute to enhanced exercise performance in rats. Journal of Physiology, 590(11), 2783–2799. https://doi.org/10.1113/jphysiol.2012.230490
- Selvaraj, S., Borkar, D. S., & Prasad, V. (2014). Media coverage of medical journals: Do the best articles make the news? PLoS ONE, 9(1). https://doi.org/10.1371/journal.pone.0085355
- Miller, P. E., & Perez, V. (2014). Low-calorie sweeteners and body weight and composition: A meta-analysis of randomized controlled trials and prospective cohort studies. American Journal of Clinical Nutrition, 100(3), 765–777. https://doi.org/10.3945/ajcn.113.082826
- Doll, R., & Hill, A. B. (1954). The mortality of doctors in relation to their smoking habits: A preliminary report. British Medical Journal, 1(4877), 1451–1455. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2085438/pdf/brmedj03396-0011.pdf
- Greenwood, D. C., Threapleton, D. E., Evans, C. E. L., Cleghorn, C. L., Nykjaer, C., Woodhead, C., & Burley, V. J. (2014). Association between sugar-sweetened and artificially sweetened soft drinks and type 2 diabetes: Systematic review and dose-response meta-analysis of prospective studies. In British Journal of Nutrition (Vol. 112, Issue 5, pp. 725–734). Cambridge University Press. https://doi.org/10.1017/S0007114514001329