Monthly Archives: June 2024

Shallow Science Reporting in The Atlantic

On June 3rd, Jonathan Lambert published an article in The Atlantic entitled “Psychedelics are Challenging the Scientific Gold Standard” (see here). The tagline was “How do you study mind-altering drugs when every clinical-trial participant knows they’re tripping?”

I’ll first mention that articles relating to psychedelics are always attractive clickbait. That’s not necessarily bad. One might hope that such clickbait will attract enough readers to impart some more generalized scientific knowledge and insight.

But sadly this article instead spreads serious misinformation and creates harmful misconceptions. The other day my wife, who is an accomplished epidemiologist, shared her frustration over the many misinformed and misleading scientific arguments presented in this article.

I’ve already written quite a bit about the issue of terrible scientific reporting in this blog and in my book, Pandemic of Delusion (see here). So in this installment I’ll try to use this as a learning opportunity to share some more accurate scientific insight into clinical trials as well as to correct some of the misinformation presented in this article.

The author claims that the study of mind-altering drugs presents a new challenge, since participants can easily tell whether or not they are tripping. Being aware of which treatment you have received could distort or even invalidate the results.

But this is hardly a new or even remotely unique challenge. There are a wide range of non-hallucinogenic treatments that have side effects that are also easily apparent to the participants. In fact it is an extremely common situation for epidemiologists, one that they have dealt with successfully for many decades in any study where the treatment has noticeable side effects like nausea or lethargy.

The author then goes on to present this as a fundamental problem with Randomized Controlled Trials (RCTs) as a clinical study design. An RCT is a widely accepted and well-proven design in which participants are assigned to the trial groups being tested and compared in a completely random manner. As Mr. Lambert correctly points out, the RCT is the “gold standard” of clinical study designs.
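The core mechanism of an RCT, random assignment, is simple enough to sketch in a few lines. The following is a toy illustration (the function name and group labels are my own, hypothetical choices, not anything from the article): because assignment is driven by chance alone, both known and unknown confounders are balanced across the arms on average.

```python
import random

def randomize(participants, arms=("treatment", "control"), seed=None):
    """Toy sketch of randomized assignment: each participant is
    placed in a trial arm purely by chance, so the arms are
    comparable on average in every respect except the treatment."""
    rng = random.Random(seed)  # seeded for reproducibility
    return {p: rng.choice(arms) for p in participants}

# Eight hypothetical participant IDs, assigned at random:
assignments = randomize([f"P{i:03d}" for i in range(8)], seed=42)
for pid, arm in sorted(assignments.items()):
    print(pid, arm)
```

Real trials use more sophisticated schemes (block or stratified randomization) to guarantee balanced group sizes, but the principle is the same.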

In his article, the author attempts to make the case that this “gold standard” is insufficient to meet the challenge of studies of this kind and that “We shouldn’t be afraid to question the gold standard.” That quote comes from one of his sources, but it was still chosen and presented by the author to support his conclusions. I would be highly surprised if his source intended the comment to be interpreted as it is used in this article. I know my wife is often incensed by the way her interview comments have been selectively used in articles to convey something very different from what she intended.

As an aside, I want to mention that when journalists interview scientists, they generally refuse any offer to "fact check" the final article, citing "journalistic integrity." I find this claim highly suspect, particularly since interviewers like Rachel Maddow commonly begin by asking their guests, "Did I get all that right in my summary introduction?" That practice only improves, rather than compromises, their journalistic integrity and the accuracy of their reporting.

In any case, while every study presents unique challenges, none of these challenges undermine the basic validity of our gold standard.

But to support his assertion, the author incorrectly links RCT designs with “blinding.” He states that “Blinding, as this practice is called, is a key component of a randomized controlled trial.”

For clarification, blinding is the practice of concealing treatment group assignments from the participants, and preferably from the investigators as well (which is called double blinding), even after the treatment is administered.
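One common way double blinding is implemented in practice is through coded treatment kits prepared by an independent party. Here is a minimal sketch of that idea (the names and kit-code format are hypothetical, purely for illustration): everyone at the trial site sees only opaque codes, while the key linking codes to treatments stays sealed until the trial is unblinded.

```python
import random

def blinded_allocation(participants, seed=0):
    """Sketch of double blinding layered on top of randomization:
    an independent party maps each participant to an opaque kit
    code; the key linking codes to the true arm is held back."""
    rng = random.Random(seed)
    sealed_key = {}   # kit code -> true arm (kept by a third party)
    allocation = {}   # participant -> kit code (what site staff see)
    for i, p in enumerate(participants):
        code = f"KIT-{i:04d}"
        sealed_key[code] = rng.choice(["drug", "placebo"])
        allocation[p] = code
    return allocation, sealed_key

allocation, sealed_key = blinded_allocation(["A", "B", "C", "D"])
# Participants and investigators only ever see `allocation`;
# `sealed_key` stays with an independent statistician until unblinding.
print(allocation)
```

Note that the randomization step and the concealment step are logically separate, which is exactly why blinding is an optional addition to an RCT rather than part of its definition.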

But blinding is an entirely optional addition to an RCT study design. It is not a required component of an RCT, let alone a “key component” as the author asserts. Many valid RCT designs are not blinded, let alone double-blinded. For more detail on this topic I point you to the seminal reference by Schulz and Grimes, published in 2002.1

The author makes a similar mistake by conflating RCT designs with placebo controls. To clarify the misconception he has created: many studies, including randomized trials, do not include a placebo group, nor is one always necessary or sensible. In many typical cases the goal is to compare a new drug against the previous standard of care, and a placebo is not relevant. In other cases the use of a placebo would be unethical, as in trials of contraceptives.

Next the author advocates for new, alternative study designs like “open label trials” and “descriptive studies.” But neither of these designs is new, nor are they in any way superior to randomized trials. In fact they are far inferior, introducing a host of biases that an RCT is designed to eliminate. They are alternatives, yes, but only when one cannot economically, technically, or ethically conduct a far more rigorous and controlled RCT.

Non-randomized trials can also be used as easy “screening” studies to identify potential areas for more rigorous investigation. For example, non-randomized studies initially suggested that jogging after myocardial infarction could prevent further infarctions. Randomized studies proved this to be incorrect, probably because of other lifestyle differences between those who choose to exercise and those who do not. But again, such findings should be taken as tentative until a proper RCT can be conducted.
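The jogging example illustrates confounding by self-selection, and the effect is easy to demonstrate with a toy simulation. In the model below (entirely made up for illustration; all numbers are arbitrary assumptions, not data from any study), jogging has no true effect on reinfarction risk, but healthier people are both more likely to jog and less likely to reinfarct. The non-randomized comparison shows a large apparent benefit; randomizing who jogs makes it vanish.

```python
import random

random.seed(1)
N = 100_000  # simulated participants per study

def simulate(randomized):
    """Compare reinfarction rates between joggers and non-joggers.
    Jogging has NO true effect here; only latent health matters."""
    events = {"jog": [0, 0], "no_jog": [0, 0]}  # [events, n]
    for _ in range(N):
        healthy = random.random() < 0.5
        if randomized:
            jogs = random.random() < 0.5          # assigned by coin flip
        else:
            jogs = random.random() < (0.8 if healthy else 0.2)  # self-selection
        risk = 0.05 if healthy else 0.20          # depends only on health
        event = random.random() < risk
        group = "jog" if jogs else "no_jog"
        events[group][0] += event
        events[group][1] += 1
    return {g: e / n for g, (e, n) in events.items()}

obs = simulate(randomized=False)  # jogging looks strongly protective
rct = simulate(randomized=True)   # the apparent effect disappears
print("observational:", obs)
print("randomized:   ", rct)
```

This is exactly the bias that randomization is designed to eliminate: in the randomized version, latent health is balanced across the two groups by construction.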

And there are many options that trained researchers can use to study hallucinogenic drugs, as they do for a wide range of treatments with detectable effects, without compromising the sound basis of a good randomized trial design. As just one example, they could give the control group an active comparator that causes many of the same symptoms, even tripping! This is done fairly routinely in other similar situations.

There are many other criticisms one could and should make of this article, but I’ll wind down by saying that psychedelics are not “challenging the scientific gold standard.” We do not need to compromise the integrity of good scientific methods in order to study the efficacy of hallucinogens in treating PTSD or any other conditions.

And further, we should push back against this kind of very poor scientific reporting because it propagates misinformation that undermines good, sound, established scientific techniques. The Atlantic should hold its authors to a higher standard.

  1. Kenneth F. Schulz and David A. Grimes, “Blinding in randomised trials: hiding who got what,” The Lancet, vol. 359 (February 23, 2002). ↩︎