
Want to Know If Someone Is Manipulating Data?

— Milton Packer describes how to distinguish science from magic


Every magician is a master of deception, and we adore being deceived. How do magicians accomplish their illusions? The key to every magic trick is misdirection. If you tell the members of the audience to look at A, then they will not look at B. And it is B that makes the trick work.

That is why many magicians forbid cell phones during their performances. If you can take a video of the trick and play it back repeatedly, you might eventually be able to find out how the trick works. You can keep looking for B, even though the performer is doing everything to make sure that you are focused on A.

Why am I talking about magicians in a blog devoted to medicine?

Two weeks ago, I wrote a post about my experiences as a principal investigator in large-scale clinical trials. Several readers thought that my personal experiences did not represent the norm. Many thought that clinical trial data are commonly manipulated in order to put them in the best possible light. I had to acknowledge that their concerns were valid.

A respected friend suggested that I devote a post to describing how someone might manipulate data in order to make a negative trial look like a positive one. My challenge: how could I possibly describe it in a blog?

Soon the answer became obvious. Deception of the audience in presenting a clinical trial is based on the same strategy of misdirection that magicians use to make their performances work.

Believe it or not, there are dozens of forms of misdirection that are possible when presenting the results of a clinical trial. They could fill an entire book. But today, I am going to mention the two most important ones, which any reader or listener can look for.

First and most important is the trick of missingness. The best way to make data look better is to remove data that you do not like, or to never collect them at all. If the presentation does not account for missing data, all sorts of mischief are possible.

Let us say that you have randomized 600 patients in a trial. According to the intention-to-treat principle that governs the integrity of clinical trials, you need to show data on 600 patients. But often, investigators will show you data on 550 patients, having taken 50 patients out of the analysis.

Clinical investigators can provide all sorts of reasons why the 50 patients are missing. They can say that the patients never returned for follow-up, or that they violated the protocol and were removed from the analysis. Investigators can get very creative in devising reasons that seem credible but are biased. They can even claim that the missingness does not matter because it affects both treatment groups equally, even though that claim is certainly not true.

The truth: Missingness is never random, and if it is large enough, it is always a source of bias. Did the patient not return for a repeat evaluation because they died or suffered a serious adverse effect? The investigator might not even know. The integrity of a clinical trial depends on the ability of an investigator to fully describe and account for all missing data. A strong investigator worries about missing data; a careless investigator ignores the problem.

When is missingness important? When the amount of missing data is a meaningful proportion of the size of the treatment effect. Example: if the treatment group had 25 fewer deaths than the control group, missing data in 15 patients is meaningful. If the treatment group had 200 fewer deaths than the control group, missing data in five patients is very unlikely to be relevant.
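One way to make this rule of thumb concrete is a worst-case check (statisticians sometimes call it a tipping-point analysis): assume every missing patient counts against the treatment and see how much of the effect survives. Here is a minimal sketch in Python; the function name and all of the event counts are invented for illustration, and the bound is deliberately pessimistic.

    def worst_case_difference(events_treat, events_ctrl, n_missing):
        """Most pessimistic reading of a trial with missing outcomes.

        Shrink the observed difference in events by the total number
        of missing patients, as if every one of them -- in either arm
        -- could have counted against the treatment.
        """
        observed = events_ctrl - events_treat   # e.g., 25 fewer deaths on treatment
        worst = observed - n_missing            # what survives the worst case
        return observed, worst

    # The two examples above, with invented event counts:
    print(worst_case_difference(75, 100, 15))   # (25, 10): the worst case
                                                # erases most of the effect
    print(worst_case_difference(300, 500, 5))   # (200, 195): essentially untouched

When the worst-case number is far smaller than the observed one, as in the first case, the only honest response is to ask how the missing data were handled.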

Second is the trick of not showing a planned analysis, or alternatively, showing an analysis that was not planned. In every clinical trial, the rules governing data analysis are written down in advance in a protocol and a formal statistical plan. These documents provide evidentiary proof that the investigators planned to look at very specific endpoints, each defined in a very specific way and analyzed in a very specific manner, in a very specific sequence. These rules are defined before anyone has a chance to look at the data.

How do you know if the investigators followed their prespecified rules? You need to read the protocol and the statistical plan. And if you can, you need to look at the dates that these documents were filed with regulatory agencies.

These documents might reveal that the investigators defined four endpoints in a very specific manner, and that they intended to analyze them in the following sequence: A, D, C, B.

So would you worry if the investigators only presented the results of A and C? Would you worry if they changed the definition of A after the fact? Would you worry if they analyzed C in a way that was not planned? And would you worry if the presenter told you to focus your attention on a new endpoint -- let us call it E -- which was never planned in advance at all?

You should worry under all these circumstances.
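One reason these deviations matter is that many statistical plans control the overall false-positive rate with fixed-sequence (hierarchical) testing: each endpoint is formally tested only if every endpoint before it in the prespecified order has succeeded. The sketch below, in Python with invented endpoint names and p-values, shows why quietly presenting A and C while skipping D breaks the logic of the plan.

    ALPHA = 0.05
    planned_sequence = ["A", "D", "C", "B"]   # order fixed before anyone sees the data
    p_values = {"A": 0.01, "D": 0.21, "C": 0.03, "B": 0.04}

    def fixed_sequence_test(sequence, pvals, alpha=ALPHA):
        """Test endpoints in the prespecified order and stop at the
        first failure. Endpoints after the failure are never formally
        tested, no matter how small their p-values look."""
        wins = []
        for endpoint in sequence:
            if pvals[endpoint] >= alpha:
                break                         # the chain is broken here
            wins.append(endpoint)
        return wins

    print(fixed_sequence_test(planned_sequence, p_values))   # ['A']

    # D fails, so under the plan neither C nor B can be claimed, even
    # though their p-values are below 0.05. A presentation that shows
    # only "A and C" is hiding the broken link at D.

Fixed-sequence testing is only one of several gatekeeping schemes a plan might specify, but the lesson is the same for all of them: the order and the definitions are part of the evidence.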

How can you tell if the investigators followed their plan faithfully? A few top-tier journals require that investigators provide their protocol and statistical plan at the time of initial peer review, and these documents are published as online supplements to the paper reporting the main study results. Sadly, most journals do not have this requirement. And even when these documents are published, most readers do not bother to look at them.

There are four important things to remember about these documents.

  • Investigators know that these documents will be closely scrutinized. Therefore, some might be tempted to specify an improper analysis in advance. Specifying something stupid in the statistical plan does not make it valid.
  • Investigators should summarize the essence of these documents on a slide shown at the time of their presentation at a scientific meeting. It is one of their most important slides, but it is also the one that most people in the audience ignore. And all too often, it is missing entirely.
  • If the drug or device is approved, the FDA is required to make its analyses available to the public. Therefore, it is possible to compare the analyses in a publication with the analyses performed by the FDA. For all prespecified analyses, these should look very similar to each other. The FDA analyses are particularly easy to access if the drug or device has been considered at a public advisory committee, since they are posted simultaneously on the FDA website.
  • The statistical plan focuses only on the analyses that are relevant to demonstrating the intervention's efficacy for a specific indication. Many secondary papers from a clinical trial describe analyses intended to learn about other effects of the intervention or about the disease itself. These analyses are not part of the regulatory approval process, and their findings should always be considered in the context of the totality of evidence in the medical literature. Some are hypothesis-generating; some confirm similar observations in other trials.

So in a nutshell, here are two simple questions to ask.

First, are there missing data and is the degree of missingness meaningful?

Second, did the authors specify a valid analysis plan in advance and did they follow it?

If the answers to these two questions are troubling, you should wonder and worry. Does the presenter want me to focus on A, when I should be looking at B? If the presenters want to misdirect the audience, it is a really simple thing to do -- especially in a presentation that lasts for only 10-15 minutes.

To be clear, these are not the only two tricks that people can play with the data from a clinical trial. But they cover a lot of ground.

Here is my most important point of all. Investigators who engage in misdirection may not actually be consciously trying to mislead people. Amazingly, they are often the ones who are being misled. All too often, investigators are susceptible to self-deception -- especially if they do not know the rules of proper analysis and are inclined to find a way to show that the intervention works (even if it does not).

Misdirection is essential to the success of magicians. When done with perfection, it is a delight. The audience truly enjoys being fooled.

But when we are listening to the primary results of a trial or reading the publication of these results in a journal, we are not interested in entertainment or wishful thinking. We are interested in unbiased data and analyses. This is what makes science different from magic.

Disclosures

Packer recently consulted for Actavis, Akcea, Amgen, AstraZeneca, Boehringer Ingelheim, Cardiorentis, Daiichi Sankyo, Gilead, J&J, Novo Nordisk, Pfizer, Sanofi, Synthetic Biologics, and Takeda. He chairs the EMPEROR Executive Committee for trials of empagliflozin for the treatment of heart failure. He was previously the co-PI of the PARADIGM-HF trial and serves on the Steering Committee of the PARAGON-HF trial, but has no financial relationship with Novartis.