Why is Evidence-Based Practice difficult for everyday clinicians?

by Dr. Phil on August 2, 2011

Part 2 of More practical Evidence-Based Practice for today’s clinician

“Heterogeneity” among patient populations is key to understand. One of the problems with relying too heavily on clinical research is failing to appreciate the ‘sterility’ of that research. The scientific method relies on homogeneity when applying results to a specific population, and on controlling as many variables as possible in order to isolate an effect. Yet studies of patient populations with the same diagnosis often do not control for the different impairments leading to the pathology.

I’m often amazed at how many clinical studies lump patients with the same diagnosis together without stratifying for different impairments. Experienced clinicians treat the impairments, not the diagnosis; they also know that no two patients with the same diagnosis are alike. This heterogeneity within a population forces us to adjust how we apply EBP as defined previously. While it would be ideal for clinical research to generalize to a large population, the different clinical presentations of patients with the same diagnosis can make implementing published research findings difficult.

Another limitation of clinical research is the lack of direct, valid, and reliable measures for certain clinical symptoms. One of my favorites is “proprioception”. Clinical researchers treat measures such as joint repositioning and kinesthetic awareness as valid measures of proprioception. (Lephart and Fu wrote an excellent text on proprioception in 2000.) Technically speaking, proprioception can likely only be validly measured with somatosensory evoked potentials within the cortex! The common clinical tests mentioned above can give us a sense of how proprioception is ‘processed’, but they cannot quantify proprioception itself. Clinical research will continue to improve as the technology to measure clinical entities improves.

We can’t base everything we do in the clinic on evidence alone. There is such a dearth of evidence that we would have nothing to do if we practiced true traditional EBP. I continue to be frustrated by the number of systematic reviews that conclude there is “not enough evidence” to draw a conclusion. One thing I’ve noticed over the years is that clinical research lags clinical practice by several years. We often do things in the clinic with biologic plausibility until they are finally researched a few years later; remember that it takes several years for good-quality research to go from start to publication!

While I appreciate the push for more evidence to support our interventions, we should not base all decisions on the presence of positive or negative outcomes. “NOT PROVEN” techniques still have a place in clinical practice as long as they make sense and the “PROVEN” techniques have been attempted or ruled out. Perhaps we should redefine EBP as “evidence-led” or “evidence-informed” practice.
