We added two new videos to clarify and expand upon some concepts. They are both found on the Power, Type II Error and Beta page. They speak specifically to the
- difference between clinical vs statistical significance and
- why confidence intervals are better than p-values (and how to interpret CIs).
If you have questions, please put them in the comments. Thanks!
Beth and Rahul
Clinical versus Statistical Significance
Clinical significance is different from statistical significance. A difference between means, or a treatment effect, may be statistically significant but not clinically meaningful. For example, if the sample size is big enough, very small differences may be statistically significant (e.g., a one-pound change in weight or a 1 mmHg change in blood pressure) even though they have no real impact on patient outcomes. So it is important to pay attention to clinical significance as well as statistical significance when assessing study results. Clinical significance is determined using clinical judgment, as well as the results of other studies that demonstrate the downstream clinical impact of shorter-term study outcomes.
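To see how sample size alone can drive statistical significance, here is a minimal Python sketch. It uses a simple two-sample z-test; the 1 mmHg difference and the 15 mmHg standard deviation are illustrative numbers, not taken from any particular study:

```python
import math

def two_sample_p_value(mean_diff, sd, n_per_group):
    """Two-sided p-value for a difference in means (z-test, equal SDs assumed)."""
    se = sd * math.sqrt(2 / n_per_group)          # standard error of the difference
    z = mean_diff / se
    return math.erfc(abs(z) / math.sqrt(2))      # two-sided tail probability

# A clinically trivial 1 mmHg blood pressure difference (assumed SD = 15 mmHg):
print(two_sample_p_value(1, 15, 100))    # modest sample: p well above 0.05
print(two_sample_p_value(1, 15, 5000))   # huge sample: p below 0.05
```

The same trivial 1 mmHg difference goes from "not significant" to "significant" purely because the sample grew, which is exactly why statistical significance alone cannot establish clinical importance.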
Why confidence intervals are better than P-values
Now that we’ve looked at hypothesis testing, confidence intervals, and the difference between clinical and statistical significance, we can examine the difference between p-values and CIs. Most journals report a point estimate of what they’re trying to measure along with a p-value. This tells you whether the point estimate is statistically significant.
Many journals are now requiring the reporting of confidence intervals over p-values. CIs give you additional information:
- First, they give you an idea of the precision of the point estimate. You’re given a range, and a huge range is worse than a nice, tight, narrow one.
- Second, they give you an idea of whether the estimate is clinically significant. If the range includes values that are clinically important, the study may have found a clinically significant difference.
Reporting only p-values and point estimates discards this extra (and very useful) information.
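The precision point can be made concrete with a small sketch: the same point estimate yields a wide interval from a small sample and a narrow one from a large sample. The z-based formula and all numbers below are illustrative assumptions:

```python
import math

def ci_for_mean_difference(mean_diff, sd, n_per_group, z=1.96):
    """Approximate 95% confidence interval for a difference in means (z-based)."""
    se = sd * math.sqrt(2 / n_per_group)
    return (mean_diff - z * se, mean_diff + z * se)

# Same 1 mmHg point estimate (assumed SD = 15 mmHg), two sample sizes:
print(ci_for_mean_difference(1, 15, 100))    # wide interval: imprecise estimate
print(ci_for_mean_difference(1, 15, 5000))   # narrow interval: precise estimate
```

With the large sample, the interval is tight and excludes zero (statistically significant), yet the entire range sits near 1 mmHg, which is clinically trivial. That is information a p-value alone would never reveal.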
The other point covered in this video is the concept of the point of no difference, which is used to determine statistical significance from a confidence interval. There are two ways to compare numbers: subtraction and division.
- In subtraction-type comparisons, the point of no difference is zero. One thing minus the same thing equals zero. If your confidence interval includes zero, the result is not statistically significant. You can recognize subtraction-type comparisons by words such as “difference” or “reduction” (e.g., “risk reduction”).
- In division-type comparisons, the point of no difference is one. One thing divided by the same thing equals one. If your confidence interval includes one, the result is not statistically significant. You can recognize division-type comparisons by the word “ratio” (e.g., “odds ratio” or “risk ratio”).
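The two rules above can be collapsed into one small helper. This is a sketch; the function name and the example intervals are invented for illustration:

```python
def is_statistically_significant(ci_low, ci_high, comparison):
    """Check whether a 95% CI excludes the point of no difference.

    comparison: "subtraction" (a difference or reduction; null value = 0)
                or "division" (a ratio; null value = 1)
    """
    null_value = 0.0 if comparison == "subtraction" else 1.0
    return not (ci_low <= null_value <= ci_high)

# Risk difference with CI (-0.02, 0.08): includes 0, so not significant
print(is_statistically_significant(-0.02, 0.08, "subtraction"))  # False
# Odds ratio with CI (1.2, 2.5): excludes 1, so significant
print(is_statistically_significant(1.2, 2.5, "division"))        # True
```

Note that the rule is symmetric: a ratio whose whole interval falls below one (e.g., 0.5 to 0.9) is also statistically significant, just in the protective direction.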