There is no general method for telling science and pseudoscience apart, but there is a method for detecting science-based bullshit. (Around 1,500 words, estimated reading time: 8 min.)
In science, there is no such thing as ‘naked facts’.
If you have read Part I, you already know that data is (always) corrected data. Part I introduced some basic philosophy of science, and why there is no such thing as a general method for telling science and pseudoscience apart.
Part I also outlined the philosophical analysis of bullshit and argued that most of the ‘science bros’ are in fact science bullshitters, because they exploit features of science that baffle outsiders in order to advertise and sell their services.
In Part II (this post) I look into how scientists deal with internal conflict within their disciplines, with a specific focus on exercise science. And because the series is about science and bullshit, there will be something about bullshitters at the end as well.
How science really works
The demarcation problem has no clear-cut solution, and neither does the underdetermination problem. (If you have no idea what I mean, go read Part I!)
A consequence of this is that science tends to work through consensus on what scientists in a field accept as good methodological practice or as reasonable hypotheses.
This consensus is not an entirely formal affair. There is no explicit agreement on what scientists should put in undergraduate textbooks, etc. But scientists cite each other, use each other’s textbooks, publish in peer-reviewed journals, and so on, which amounts to implicit votes of confidence.
Scientists also tend to imitate each other. In particular, newcomers and risk-averse researchers (read: those who want to increase their chances of actually publishing something) tend to imitate their successful colleagues, who are often the older, more established ones who do not change their minds easily.
Consequently, the consensus is pretty stable and may actually take priority over methodological standards in a variety of cases. There are two obvious cases in which the resulting situation may be confusing to outsiders:
1. when a fundamental hypothesis is accepted without support from data, and:
2. when a fundamental hypothesis is not rejected in spite of being contradicted by data.
The best example of (1) and (2) that I know of is dual-process theory (formerly dual-system theory) in cognitive psychology. Every serious researcher knows that the theory’s boat is leaking, but almost every article will include a paragraph stating that its conclusions are compatible with dual-process theory. The reasons abound, including a Nobel Memorial Prize and well-received pop-sci expositions: like this one, by one of the Nobel recipients; or that one about Sherlock Holmes, by a blogger, which I know for a fact to be bullshit because Sherlock Holmes does not think that way (evidence here, and here).
Now, I could go on with this for hours, but you’ll probably prefer something more closely related to exercise science, so here it is.
An example: Supercompensation Theory
Within sports science, an illustration of both (1) and (2) above is supercompensation theory. I won’t go into the details of the theory because you probably know it already under one name or another:
- If you have read Prof. Zatsiorsky’s textbook Science and Practice of Strength Training (published in 1995, 2nd ed. 2006), it’s in the first chapter under that name.
- You may also have heard about it through Juggernaut Training System’s book The Scientific Principles of Strength Training (or the YouTube series based on it), where it’s presented as the Stress-Recovery-Adaptation cycle (the 4th principle).
- Finally, you may know it under the official name of its parent model, the General Adaptation Syndrome (GAS), especially if you have stumbled upon Natalia Verkhoshansky’s incredibly detailed presentation of the GAS and its problems.
If you are familiar with supercompensation theory, the visual summary from Zatsiorsky & Kraemer’s Science and Practice of Strength Training, 2nd ed., will suffice as a reminder.
If you are not familiar with it, the figure is almost self-explanatory: without getting into the details, “substance” here means “something that can be represented as going up and down on a curve”. Supercompensation theory stipulates that there is one substance that represents preparedness: the more you have, the better you can perform, but you spend it when you perform, so you have to replenish it.
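For readers who like code better than curves, that one-factor picture can be sketched in a few lines. Everything below (function name, parameters, numbers) is my own illustrative construction, not anything from Zatsiorsky & Kraemer; the only point is the shape of the curve: a dip right after the session, an overshoot above baseline, then a slow return.

```python
import math

def preparedness(t, baseline=100.0, depletion=20.0,
                 tau_recovery=1.0, tau_decay=5.0):
    """Toy one-factor supercompensation curve after a single workout at t = 0.

    The workout depletes the 'substance' by `depletion`; it is restored
    exponentially (time constant `tau_recovery`, in days), overshoots the
    baseline, then drifts back down (time constant `tau_decay`). Every
    number here is an illustrative placeholder, not a textbook value.
    """
    recovery = depletion * math.exp(-t / tau_recovery)      # what is still missing
    rebound = depletion * (math.exp(-t / tau_decay)
                           - math.exp(-t / tau_recovery))   # transient overshoot
    return baseline - recovery + 1.5 * rebound
```

Right after the session, preparedness sits below baseline; a couple of days later it peaks above it; eventually it returns to baseline. That is the whole “train again at the peak” story the figure tells.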
If you need more information, you can speed-read the Wikipedia page and come back, because you really don’t need to know more than what’s in it. Otherwise, read the first chapter of Zatsiorsky & Kraemer. Now, here’s how supercompensation theory illustrates points (1) and (2) from the previous section:
- The theory was not proposed to explain the effects of training. It was initially proposed to explain the response of organisms to toxins, and was later adopted in sports science on the strength of an analogy between training-induced stress and exposure to toxins; essentially, an elaboration on “what does not kill us makes us stronger”.
- The theory is known to be incorrect but remains a textbook staple. The theory predicts that overtraining happens gradually, and the prediction is invalidated by data (for evidence, check this paper, p.46, section Overtraining). The problem is not something that can easily be fixed.
So, why does supercompensation theory still enjoy popularity?
In a nutshell, because in practice you can discount the issue with the incorrect prediction about overtraining. If you happen to train endurance athletes who are at risk of overtraining, you can still base the bulk of your programming on the supercompensation model. You’ll get results. And you’ll mitigate the risks by eyeballing recovery, provided that you don’t trust the model to tell you where to look for the warning signs.
In Science and Practice of Strength Training, Prof. Zatsiorsky discusses an alternative to supercompensation (the ‘fatigue-fitness’ model) that he clearly prefers. But he goes to great lengths to explain that both are useful for programming training. The longer story, which is not for textbooks, is that the fatigue-fitness model was not formulated in response to data, but to:
(a) do better than the supercompensation model for predicting overtraining; and:
(b) avoid being contradicted by data already compatible with supercompensation.
In principle, (a) gives the fatigue-fitness model better explanatory power relative to existing data, but pretty much the same predictive power, because it does not predict radically different and spectacular effects. There is nothing as spectacular as, say, General Relativity predicting that the Sun bends starlight twice as much as Newtonian mechanics would predict, a prediction that made newspaper headlines in 1919 when Sir Arthur Eddington confirmed it.
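For the curious, the two-factor idea is easy to state as code. The sketch below is in the spirit of Banister’s impulse-response (fitness-fatigue) model: every session leaves behind a fitness impulse (small gain, slow decay) and a fatigue impulse (large gain, fast decay), and predicted performance is baseline plus fitness minus fatigue. The parameter values are placeholders I made up for illustration, not fitted values from the literature.

```python
import math

def performance(loads, t, p0=100.0,
                k1=1.0, tau_fitness=45.0,   # fitness: small gain, slow decay
                k2=2.0, tau_fatigue=15.0):  # fatigue: large gain, fast decay
    """Two-factor sketch in the spirit of the Banister fitness-fatigue model.

    `loads` maps a training day to a training load. Each past session
    contributes a fitness impulse and a fatigue impulse, both decaying
    exponentially; predicted performance is baseline + fitness - fatigue.
    All parameter values are made-up placeholders, not fitted to data.
    """
    fitness = sum(w * math.exp(-(t - day) / tau_fitness)
                  for day, w in loads.items() if day <= t)
    fatigue = sum(w * math.exp(-(t - day) / tau_fatigue)
                  for day, w in loads.items() if day <= t)
    return p0 + k1 * fitness - k2 * fatigue
```

With a single session (`loads = {0: 10.0}`), predicted performance first drops below baseline (fatigue dominates) and later rises above it once the fatigue has decayed, without positing a single “substance” at all.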
So all in all, the methodological reasons to prefer the fatigue-fitness model are still pretty weak, and big shots in exercise science have not yet given up on the supercompensation model. That’s why supercompensation is still in the textbooks.
And it’s there to stay.
Zatsiorsky’s discussion of single-factor and dual-factor models is an example of within-textbook contradiction that may confuse the outsider.
Alongside this, one can stumble upon equally confusing between-textbook contradictions within the same field or neighboring ones. McGill’s Low Back Disorders contradicts elementary textbooks in anatomy, physical therapy, and exercise science on matters such as the function of certain muscles, the proper selection of rehabilitation exercises, or the organization of training.
And then there are study-to-textbook and study-to-study contradictions, which occur because at least some of the scientific studies that have yet to make it into textbooks intend to question the consensus and eventually lead to revised textbooks.
All these contradictions are the by-product of science on which science bullshitters build their trade. Let’s turn to them for our conclusions.
As Part I showed, there is no such thing as a general method for telling science and pseudoscience apart. And while it is in principle possible to massage a data set so as to fit almost any theory, scientists tend to agree about how much massaging amounts to admissible correction and how much amounts to fraud.
The system that enforces the agreement is not perfect. Every once in a while a fraudster manages to slip through the cracks. And sometimes, a scientific field questions the standards for reasons that depend on methodology, but also sociology and economics.
The bottom line is that science works on a different time scale than news outlets or social media and needs time to sort itself out. The consequence of the above is that the “cutting edge” of scientific research is pretty much useless to the layman: it has not stood the test of time (and replication).
Add to this that a lot of studies aim at incremental contributions (they do not intend to produce new results, but rather to support conclusions that could have been deduced from our current theories but have not so far been directly tested) and you have one more reason to stick to textbooks.
As an illustration, take Monthly Applications in Strength Sport (MASS), a commercial service from Stronger By Science offering a monthly digest of “the latest strength, hypertrophy, and nutrition science” by an Exercise Science M.A. candidate (Greg Nuckols), a Strength & Conditioning Ph.D. candidate (Eric Helms) [Edit (April 2018): Helms now has a Ph.D.], and an actual Ph.D. in Exercise Physiology (Michael Zourdos).
Now, check their promotional issue. None of the studies reviewed breaks new ground. Some propose useful refinements to the existing methodology, but that’s about it.
Now, it’s a promotional first issue, so they probably did not put too much effort into it, and they most likely cherry-picked studies whose results would go their readership’s way, but still.
I don’t say that it’s the wrong way.
What I say is this: for a little under three months’ worth of a MASS subscription, you can buy a hard copy of Prof. Zatsiorsky’s book. Granted, it’s 11 years old. But if they have not updated it, it might be for a reason. After all, dead-tree publishers need money as much as electronic-content ones do. But it’s harder for them to sell good stuff, and all the harder to sell bullshit (nudge-nudge, wink-wink!).
I don’t know about you, but I think I’ll stick to Zatsiorsky until the next edition.