
In my scientific evangelism, I often tout the virtues of good scientists. One virtue I often cite is that they do not accept easy answers to difficult problems. They would rather say “we do not have an answer to that question at this time” than accept a possibly incorrect or incomplete answer. They understand that embracing such quick answers not only results in the widespread adoption of false conclusions but also inhibits the development of new techniques and methods for arriving at the fuller truth.
When it comes to clinical research, however, many clinical researchers do not actually behave like good scientists. They behave more like nonscientific believers or advocates. This is particularly true with regard to the problem of “loss to follow-up.”
What is that? Well, many common clinical research studies, such as those evaluating how well patients respond to a particular treatment, require that the patient be examined at some point after the treatment is administered, perhaps a week later, perhaps after several months have passed. Only through follow-up can we know how well that treatment has worked.
The universal problem, however, is that this normally requires considerable effort by the researchers as well as the patients. Researchers must successfully schedule a return visit, and patients must actually answer their telephone when the researchers attempt to follow up. This often does not happen. These patients are “lost to follow-up,” and we have no data for them regarding the outcomes we are evaluating.
Perhaps unsurprisingly, these follow-up rates are often very poor. In some areas of clinical research, a 50% loss to follow-up rate is considered acceptable – largely on grounds of practicality, not statistical validity. Some published studies report loss to follow-up rates as high as 75% or more – that is, they have only a 25% successful follow-up rate.
To put this in context, in their 2002 series on epidemiology published in The Lancet, Schulz and Grimes included a critical paper in which they assert that a loss to follow-up over 20% invalidates general conclusions about most populations. In some cases, a 95% follow-up rate would be required to draw legitimate general conclusions. The follow-up rate required depends on the rate of the event being studied.
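To see why the event rate matters, here is a minimal sketch in Python using entirely hypothetical numbers (nothing below comes from the Lancet paper). It asks how far the true event rate could stray from the observed one if every lost patient either did, or did not, experience the event:

```python
# Illustrative worst-case bounds on a study's true event rate, given the rate
# observed among followed-up patients. All numbers are hypothetical.

def true_rate_bounds(observed_rate, follow_up_rate):
    """Best and worst case for the true event rate among ALL patients."""
    best = observed_rate * follow_up_rate                          # no lost patient had the event
    worst = observed_rate * follow_up_rate + (1 - follow_up_rate)  # every lost patient had it
    return best, worst

for observed in (0.01, 0.30):             # a rare event vs. a common one
    for follow_up in (0.95, 0.80, 0.50):  # 5%, 20%, 50% loss to follow-up
        low, high = true_rate_bounds(observed, follow_up)
        print(f"observed {observed:.0%}, follow-up {follow_up:.0%}: "
              f"true rate could be {low:.1%} to {high:.1%}")
```

For the rare event, even a 5% loss leaves a worst-case rate roughly six times the observed one, while the same loss barely distorts the estimate for the common event. That is the sense in which rarer outcomes demand higher follow-up rates.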
Unfortunately, few studies involving voluntary follow-up by real people can achieve these statistically meaningful follow-up rates, and thus we should have appropriately moderated confidence in their results. Below some threshold, confidence that low amounts to no confidence at all.
So, given the practical difficulty of achieving a statistically satisfactory follow-up rate, what should clinical researchers do? Should they just stop doing research? After all, there are many important questions that we need answers to, and this is simply the best we can do. Therefore, most conclude, surely some information is better than none.
But is it?
Certainly most clinical researchers – but not all – are careful to add a caveat to their conclusions. They responsibly structure them to say something like:
We found that 22% of patients experienced mild discomfort and there were no serious incidents reported. We point out that our 37% follow-up rate introduces some uncertainty in these findings.
This seems like a reasonable and sufficiently qualified conclusion. However, we know that despite the warning about loss to follow-up, the take-away message is that this procedure is relatively safe, with only 22% of patients overall experiencing mild discomfort. That is almost assuredly going to be adopted as a general conclusion, particularly since the topic of the study is essentially “the safety of our new procedure.”
Adopting that level of safety as a general conclusion could be wildly misleading. It may be that the 63% of patients who were lost failed to respond because they were killed by the procedure. Conversely, the results may create unwarranted concern about discomfort caused by the procedure, since the only patients who felt compelled to follow up were those who experienced discomfort. These are exaggerations to make the point, but they illustrate very real and very common problems that we cannot diagnose, since the patients were lost to follow-up.
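To put rough numbers on how wide that uncertainty really is, here is a small sketch using the hypothetical figures from the example conclusion above (37% follow-up, 22% of followed-up patients reporting mild discomfort). It simply computes how far the true rate could lie in either direction, depending on what happened to the patients we never saw:

```python
# Hypothetical study from the example above: 37% follow-up,
# 22% of followed-up patients reporting mild discomfort.
follow_up_rate = 0.37
observed_rate = 0.22

# Share of ALL patients known to have had discomfort (followed up and reported it).
known_events = follow_up_rate * observed_rate   # about 8.1% of all patients
lost = 1 - follow_up_rate                       # 63% of all patients, outcome unknown

best_case = known_events          # assume no lost patient had discomfort
worst_case = known_events + lost  # assume every lost patient had discomfort (or worse)

print(f"The true rate could be anywhere from {best_case:.1%} to {worst_case:.1%}")
```

The headline figure of 22% is just one point in a range the data cannot narrow below roughly 8% to 71%. Anything tighter than that rests on assumptions about the patients who were never seen again.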
In any case, ignoring or minimizing or forgetting about loss to follow-up is only valid if the patients who followed up were a random sample of all patients. And that is rarely the case, and it certainly can never be assumed or even determined.
Look at it this way. Imagine a scientific paper entitled “The Birds of Tacoma.” In their methodology section, the researchers describe how they set up plates of worms and bowls of nectar in their living room and opened the windows. They then meticulously counted the birds that flew into the room to eat. They report that they observed 6 robins and 4 hummingbirds. Therefore, they conclude, the bird population of Tacoma is 60% robins and 40% hummingbirds. Of course, being scrupulous researchers, they note that their research technique could, theoretically, have missed certain bird species.
This example isn’t exactly a problem of loss to follow-up, but the result is the same. You can of course think of many, many reasons why their observations may be misleading. But nevertheless, most people would form the long-term “knowledge” that Tacoma is populated by 60% robins and 40% hummingbirds. Some might take unfortunate actions under the assurance that no eagles were found in Tacoma. Further, the feeling that we now know the answer to this question would certainly inhibit further research and limit funding for what seems to be a settled matter.
But, still, many scientists would say that they know all of this, but we have to do what we can. We have to move forward. Any knowledge, however imperfect, is better than none. And what alternative do we have?
Well, one alternative is to reframe your research. Do not purport to report on “The Birds of Tacoma,” but rather report on “The Birds that Flew into Our Living Room.” That is, limit the scope of your title and conclusions so that there is no implication that you are reporting on the entire population. Purporting to offer general conclusions and then adding a caveat in the small print at the end should be unacceptable.
Further, publishers and peer reviewers should not publish papers that suggest general conclusions beyond what their follow-up rates can support. They should require that the authors make the sort of changes I recommend above. And they themselves should be willing to publish papers that are not quite as definitive in their claims.
But more generally, clinical researchers, like any good scientists, should accept that they cannot <yet> answer some questions, because they cannot achieve a statistically sound follow-up rate for them. Poor information can be worse than no information.
When <real> scientists are asked about the structure of a quark, they don’t run whatever simple experiments the old lab equipment at hand will allow and report results hedged with disclaimers. They stand back. They say, “we cannot answer that question right now.” And they set about creating new equipment and new techniques to allow them to study quarks more directly and precisely.
Clinical researchers should be expected to put in that same level of effort. Rather than continuing to do dubious and even counterproductive follow-up studies, they should buckle down, do the hard work, and develop techniques for acquiring better data. It can’t be harder than coming up with gear to detect quarks.
“I have to deal with people” should not be a valid excuse for poor science. Real scientists don’t just accept easy answers because they’re easy. That’s what believers do. So step up, clinical researchers: be scientists, and be willing to say “I don’t know, but I’m going to develop new methods and approaches that will get us those answers.” Answers that we can trust and act upon with confidence.
If you are not willing to do that, you are little better than Christian Scientists.