Dealing With Conflicting Evidence

Introduction

What do we do when evidence from scientific studies appears to be contradictory or in conflict? This is a very common occurrence, and it is fueled by the intense scrutiny given to health issues in the media. We are all aware of the problems created when new evidence contradicts previously held opinions. The ongoing debates about which foods are healthy (eggs are in, eggs are out; is soy healthful or harmful?) or which environmental exposures cause cancer are but a few examples. The media plays a very prominent role in shaping our understanding of health matters, and media outlets vary considerably in how responsibly and accurately they portray medical science. If you do not believe this, we recommend a glance at the magazine rack the next time you are checking out your groceries! In this section we will discuss the reasons for this phenomenon and propose a set of steps to clarify the situation.

     One important consideration to bear in mind is that research evidence is not equivalent to truth. The findings of a well-designed empirical study may well be true, but this is not necessarily the case. Research evidence is often contradictory and incomplete. This is part of the scientific process and is in no way a fault.

     Medical evidence is, at bottom, fallibilistic. Fallibilism is the theory that most clearly describes the nature of evidence in health care: it holds that any of our opinions or beliefs about the external world may turn out to be false, and that a large cloud of uncertainty shadows our deliberations. Rather than having access to medical certainties, we must rely on probabilities, and so we must always leave room for the ineradicable role of error and the play of chance.

     The incomplete and constrained character of evidence also means that medical evidence is underdetermined. Underdetermination holds that mutually incompatible yet internally consistent explanations can be provided for the same evidence. Medical evidence is also, unfortunately, incomplete, and uncertainty will always surround many of the critical issues for which we require answers. Further research helps, but it does not eliminate the problem; the problem is intrinsic to the process. For many of the ills that plague us we have inadequate therapy, no reliable means of early detection, and less-than-optimal diagnostic technology. Despite media claims of medical miracles, there is much work to be done. That is why further research is required, and why we need methods to determine the soundness and applicability of research to our health problems.

     Problems arise because people have an innate interest in and concern for health. People who are not ill wish to prevent the onset of illness, and those who suffer from some affliction wish to have the best treatment. Hence they turn to the health care sector for advice on how best to maximize health and minimize illness. We may also wish to reflect on why there is such a strong will to believe in the power of certain technologies for which, on critical scrutiny, there is little supporting evidence.


Which Backing to Believe?

Reasoning through the issues involving conflicts of evidence is challenging. The answers that we seek may not always be available. In general, one can look to the evidence hierarchy for guidance in weighing factual claims. This section is devoted to explaining how the evidence hierarchy can assist in the evaluation of evidence conflicts.

     In the section on Toulmin diagrams, we showed how research studies back warrants; that is, they provide the justification for a warrant. Such backing is, in general, factual in nature. If we recall the examples given about esophageal cancer and high blood pressure, research evidence entered into consideration when factual information was required. In many contexts, it is simply stated that "studies have shown x or y." An important outcome of this course is the realization that not all studies are created equal and that conclusions drawn from better study designs will trump those of weaker designs. However, it is important not to be dogmatic about this. The best study to back a warrant is the one that most closely matches the claim being made. The deliberation of the relative merits of different study designs takes place within the context of rebuttals.
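
To make the structure of such arguments explicit, the following sketch (in Python) represents a Toulmin-style argument as a small data structure that records the claim, the warrant, the study design backing the warrant, and the rebuttals that qualify it. The class, its field names, and the example values are our own illustrative assumptions, not part of any standard tool or of the examples discussed earlier.

from dataclasses import dataclass, field
from typing import List

@dataclass
class ToulminArgument:
    """A minimal, illustrative representation of a Toulmin-style argument."""
    claim: str             # what is being asserted, often an action claim
    warrant: str           # the general rule that licenses the claim
    backing_design: str    # study design backing the warrant, e.g. "cohort study"
    rebuttals: List[str] = field(default_factory=list)  # conditions that would weaken the claim

# Hypothetical example: the warrant is backed by a factual study, and the
# rebuttals record circumstances under which the warrant would not hold.
example = ToulminArgument(
    claim="Patients with condition X should receive treatment Y.",
    warrant="Treatment Y improves outcomes in condition X.",
    backing_design="randomized trial",
    rebuttals=[
        "The trial population differs from this patient.",
        "The measured outcome was a surrogate rather than the outcome that matters.",
    ],
)

Laying the pieces out this way makes it easier to see where a conflict of evidence usually enters: it is typically a dispute about the backing (which study design, how well executed) or about the rebuttals, not about the claim itself.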

The following questions should be asked:

  • What is the current state of knowledge on this topic? Remember that evidence is often incomplete.
  • Is reliable information available from the research literature? If so, what kinds of studies are there?
  • How extensive is the literature?
  • Is there a critical appraisal guideline?
  • How much of the argument depends upon factual claims (what is the case) and how much upon normative claims (what one therefore ought to do)?
  • How well supported is the inference from fact to value or action?

Consequently, the following general rules can be developed:

  • In assessing the strength of an argument that uses empirical claims, one must first determine the type of study being cited.
  • Once the study design has been clarified, consult the evidence hierarchy and locate that design on it. In general, more credence can be given to designs higher on the hierarchy: a well-executed systematic review is more reliable than a single randomized trial, and a cohort study is more reliable than a case-control study. There are many exceptions to this rule; a simple illustration of the ranking appears after this list.
  • If a randomized trial does not exist, this does not mean that there is no good evidence to back a warrant. Be clear about the standard of proof you are seeking, and do not look for what is not there or cannot be there.
  • The context of the argument can often indicate the strength of evidence required.
  • Always assess arguments for unstated assumptions that relate factual claims to action or belief claims.
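
To make the ranking rule concrete, the short Python sketch below encodes one simplified ordering of the evidence hierarchy and compares two cited study designs. The ordering and the function are illustrative assumptions; as the rules above stress, the result is a starting point for deliberation, not a verdict.

# A simplified evidence hierarchy, ordered from most to least reliable.
# This ordering is an illustrative assumption, not a definitive ranking;
# study quality and context can override it.
EVIDENCE_HIERARCHY = [
    "systematic review",
    "randomized controlled trial",
    "cohort study",
    "case-control study",
    "case series",
    "expert opinion",
]

def compare_designs(design_a: str, design_b: str) -> str:
    """Say which of two cited study designs sits higher on the simplified hierarchy."""
    rank_a = EVIDENCE_HIERARCHY.index(design_a)
    rank_b = EVIDENCE_HIERARCHY.index(design_b)
    if rank_a == rank_b:
        return f"{design_a} and {design_b} sit at the same level; weigh execution and context."
    higher = design_a if rank_a < rank_b else design_b
    return f"In general, give more credence to the {higher}, subject to rebuttals."

print(compare_designs("cohort study", "case-control study"))
# -> In general, give more credence to the cohort study, subject to rebuttals.

A poorly executed design higher on the list can still be trumped by a well-executed one lower down, which is why the comparison is framed as "in general."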

Conflicts of evidence emerge even among systematic reviews, which are considered the most reliable form of medical knowledge. In response, Alex Jadad and colleagues developed a decision tree to help sort out conflicts of evidence among systematic reviews.
