Proud to share my latest publication with you, “Beyond Statistical Significance: Clinical Interpretation of the Rehabilitation Literature.” I wrote most of this article while I was in the hospital for a few months. I hope you enjoy it.

Here’s the abstract:

Evidence-based practice requires clinicians to stay current with the scientific literature. Unfortunately, rehabilitation professionals are often faced with research literature that is difficult to interpret clinically. Clinical research data are often analyzed with traditional statistical probability (p-values), which may not give rehabilitation professionals enough information to make clinical decisions. Statistically significant differences or outcomes simply address whether to accept or reject a null or directional hypothesis, without providing information on the magnitude or direction of the difference (treatment effect). To improve the interpretation of clinical significance in the rehabilitation literature, researchers commonly include more clinically relevant information such as confidence intervals and effect sizes. It is important for clinicians to be able to interpret confidence intervals using effect sizes, minimal clinically important differences, and magnitude-based inferences. The purpose of this commentary is to discuss the different aspects of statistical analysis and determinations of clinical relevance in the literature, including validity, significance, effect, and confidence. Understanding these aspects of research will help practitioners better utilize the evidence to improve their clinical decision-making skills.
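To make the abstract’s point concrete, here is a minimal sketch (in Python, not from the article itself) of how a clinician might read a between-group difference against a 95% confidence interval and a minimal clinically important difference (MCID), rather than relying on a p-value alone. The group means, standard deviations, sample sizes, and MCID below are made up for illustration, and the interval uses a simple normal (z = 1.96) approximation.

```python
import math

# Illustrative (made-up) summary statistics: change scores for two groups
mean_tx, sd_tx, n_tx = 15.0, 10.0, 100        # treatment group
mean_ctrl, sd_ctrl, n_ctrl = 10.0, 10.0, 100  # control group
mcid = 6.0                                    # assumed MCID for this outcome measure

# Point estimate of the treatment effect and its standard error
diff = mean_tx - mean_ctrl
se_diff = math.sqrt(sd_tx**2 / n_tx + sd_ctrl**2 / n_ctrl)

# Approximate 95% confidence interval (normal approximation)
ci_low, ci_high = diff - 1.96 * se_diff, diff + 1.96 * se_diff
print(f"Difference: {diff:.1f} points, 95% CI [{ci_low:.1f}, {ci_high:.1f}]")

# Interpret the interval against the MCID instead of a p-value
if ci_low >= mcid:
    print("The whole CI exceeds the MCID: the effect is likely clinically important.")
elif ci_high <= mcid:
    print("The whole CI falls below the MCID: a clinically important effect is unlikely.")
else:
    print("The CI spans the MCID: clinical importance remains uncertain.")
```

With these made-up numbers the interval excludes zero (so the result would be statistically significant) yet still spans the MCID, which is exactly the kind of gap between statistical and clinical significance the commentary addresses.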


What is an ‘effect size’?

by Dr. Phil on September 16, 2014

While most research studies use statistical significance to reach their conclusions, clinical research studies should also report the “effectiveness” of the treatment. Statistical significance is of limited value when we want to determine whether a treatment will have a clinical benefit.

For clinicians, the most fundamental question of clinical significance is usually, “Is the treatment effective, and will it change my practice?” The effect size is one of the most important indicators of clinical significance. It reflects the magnitude of the difference between treatment groups; a greater effect size indicates a larger difference between the experimental and control groups. For example, if the experimental group improves by 15 points and the control group improves by 10 points, the between-group difference (change score) is 5 points.

Cohen calculated the effect size as the difference between group change scores, divided by the combined standard deviation of the two groups:

Effect size = (change in experimental group − change in control group) / combined standard deviation of both groups

For example, if the difference between groups (change score) is 5 and the combined standard deviation of both groups is 10, the effect size (Cohen’s d) = 5 / 10 = 0.5.

Cohen quantified effect sizes in ranges. An effect size may be positive or negative, with the sign indicating the direction of the effect; the ranges below describe its magnitude:

< 0.2 = trivial effect

0.2-0.5 = small effect

0.5-0.8 = moderate effect

> 0.8 = large effect
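As a rough illustration of the arithmetic above, here is a small Python sketch (not from the original post) that computes an effect size from two group change scores and a combined standard deviation, then labels it with Cohen’s ranges. The pooling formula shown (averaging the two variances) assumes equal group sizes and is just one common choice.

```python
import math

def effect_size(change_exp, change_ctrl, sd_exp, sd_ctrl):
    """Cohen-style effect size: between-group difference in change scores
    divided by a combined (pooled) standard deviation."""
    pooled_sd = math.sqrt((sd_exp**2 + sd_ctrl**2) / 2)  # assumes equal group sizes
    return (change_exp - change_ctrl) / pooled_sd

def label(d):
    """Cohen's qualitative ranges, applied to the magnitude of the effect."""
    magnitude = abs(d)
    if magnitude < 0.2:
        return "trivial"
    if magnitude < 0.5:
        return "small"
    if magnitude < 0.8:
        return "moderate"
    return "large"

# Numbers from the example above: 15-point vs. 10-point improvements, SD of 10
d = effect_size(change_exp=15, change_ctrl=10, sd_exp=10, sd_ctrl=10)
print(d, label(d))  # 0.5 -> "moderate" (right at the small/moderate boundary)
```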

 


Translating evidence is the key in practice

January 5, 2014

While it’s important to support clinical practice with evidence, the bigger problem is translating that evidence into practice. There are several great clinical studies that are difficult to implement in practice for one reason or another. Sometimes the article itself does a poor job of explaining the specific protocol so that it can be reproduced (I’m amazed […]


What is the value of a systematic review?

December 18, 2013

My friend Dr. Brad Edwards, an orthopedic surgeon in Houston, Texas, is on the editorial board of the Journal of Shoulder and Elbow Surgery. He recently wrote a short article on the value of systematic reviews. There has been a recent surge in systematic reviews submitted to journals, and I for one am frustrated to see […]


Need help with research in your practice?

November 5, 2012

Evidence-based practice was defined by Sackett in 1996 as “integrating the best research evidence with clinical expertise, patient values and circumstances to make clinical decisions.” The Sports Physical Therapy Section of the American Physical Therapy Association (APTA) recently published a special issue in the International Journal of Sports Physical Therapy that reviews important topics in […]


Most Large Treatment Effects of Medical Interventions Come from Small Studies, Report Finds

October 24, 2012

A report in Science Daily reviewed a study in the October 24/31 issue of JAMA. The study examined the characteristics of studies that yield large treatment effects from medical interventions. The researchers found that these studies were more likely to be smaller in size, often with limited evidence. Interestingly, when additional trials were performed, the effect sizes […]


Decision Tree for Research Statistics

October 9, 2012

I like this decision tree for choosing which statistics to use in research. Even if you don’t perform research, it might help you, as a critical appraiser of research, determine whether a study used appropriate statistics. Unfortunately, some studies get published in journals without using the right statistical analysis…as an educated […]


Why is Evidence-Based Practice difficult for everyday clinicians?

August 2, 2011

Part 2 of More practical Evidence-Based Practice for today’s clinician. “Heterogeneity” among patient populations is key to understand. One of the problems with clinical research, and with relying too heavily on it, is not appreciating the ‘sterility’ of research. The scientific method relies on homogeneity when applying results to a specific population; the scientific method also […]


More practical Evidence-Based Practice for today’s clinician

August 2, 2011

Several years ago, the concept of “evidence-based practice” (EBP) made its way into the rehabilitation sciences…the notion that everything we do in practice needs to have evidence to support it. That raised the question, “What is evidence?” Practically speaking, “evidence” can give us one of three answers: PROVEN, PROVEN NOT, or NOT PROVEN. In other […]
