Evaluating Papers with the Users’ Guide

The JAMA Users’ Guide to the Medical Literature poses a series of questions for evaluating papers that address different clinical questions: the harm of an exposure, the benefit of a treatment, the quality of a diagnostic test, or the findings of a systematic review or meta-analysis.

The questions generally fall into one of three main categories.

  1. Are the results valid?
  2. What are the results?
  3. How can I apply the results to patient care?

Evaluating Papers on Harm

Within each of these categories, there are specific questions based on the type of paper you are evaluating. For papers on harm, these are:

  1. Are the results valid?
    • Did the investigators demonstrate similarity in all known determinants of outcome?
    • Did they adjust for differences in the analysis?
    • Were the exposed patients equally likely to be identified in the two groups?
    • Were the outcomes measured in the same way in the groups being compared?
    • Was the follow-up sufficiently complete?
  2. What are the results?
    • How strong is the association between exposure and outcome?
    • How precise is the estimate of the risk?
  3. How can I apply the results to patient care?
    • Were the study patients similar to the patient in my practice?
    • Was the duration of the follow-up adequate?
    • What was the magnitude of the risk?
    • Should I attempt to stop the exposure?

There have been several questions on the difference between risk ratios and odds ratios. Here’s a video to help understand the difference. This is a pretty important concept, so be sure you understand it and let us know if you have questions.
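To make the distinction concrete, here is a minimal sketch using a made-up 2×2 table (the counts are assumptions, not data from any study). It shows how the risk ratio and the odds ratio are calculated from the same counts, and why the odds ratio drifts away from the risk ratio when the outcome is common.

```python
# Hypothetical 2x2 table (counts are illustrative, not from any study):
#                 outcome   no outcome
# exposed            a=40         b=60
# unexposed          c=20         d=80
a, b, c, d = 40, 60, 20, 80

risk_exposed = a / (a + b)                   # 40/100 = 0.40
risk_unexposed = c / (c + d)                 # 20/100 = 0.20
risk_ratio = risk_exposed / risk_unexposed   # 2.00

odds_exposed = a / b                         # 40/60 ~= 0.67
odds_unexposed = c / d                       # 20/80 = 0.25
odds_ratio = odds_exposed / odds_unexposed   # ~= 2.67

print(f"Risk ratio: {risk_ratio:.2f}")
print(f"Odds ratio: {odds_ratio:.2f}")
```

Because the outcome here occurs in 40% of the exposed group (i.e., it is common), the odds ratio (about 2.67) overstates the risk ratio (2.00); when the outcome is rare, the two measures are close.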

Hazard Ratios

Papers on Treatment

Studies that address treatment questions are usually randomized controlled trials. We will again use the Users’ Guide questions to analyze the paper. Again, the questions fall into the same three categories:

  1. Are the results valid?
    • Did the control and experimental groups begin with a similar prognosis?
      • Were Patients Randomized?
      • Was Randomization Concealed?
      • Were Patients Analyzed in the Groups to Which They Were Randomized?
      • Were Patients in the Treatment and Control Groups Similar With Respect to Known Prognostic Factors?
    • Did the experimental and control groups end with a similar prognosis?
      • Were Patients Aware of Group Allocation?
      • Were Clinicians Aware of Group Allocation?
      • Were Outcome Assessors Aware of Group Allocation?
      • Was Follow-up Complete?
  2. What are the results?
    • How Large Was the Treatment Effect? (a worked example follows this list)
    • How Precise Was the Estimate of the Treatment Effect?
    • When Authors Do Not Report the Confidence Interval
  3. How can I apply the results to patient care?
    • Were the Study Patients Similar to the Patient in My Practice?
    • Were All Clinically Important Outcomes Considered?
    • Are the Likely Treatment Benefits Worth the Potential Harm and Costs?
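As an illustration of the “How large?” and “How precise?” questions, the sketch below uses invented trial counts (assumptions only, not taken from any real study) to compute the absolute risk reduction, relative risk reduction, number needed to treat, and an approximate 95% confidence interval for the absolute risk reduction using the normal approximation.

```python
import math

# Hypothetical trial counts (assumptions only, not from any real study)
events_control, n_control = 150, 1000       # control event rate 15%
events_treated, n_treated = 100, 1000       # treated event rate 10%

cer = events_control / n_control            # control event rate = 0.15
eer = events_treated / n_treated            # experimental event rate = 0.10

arr = cer - eer                             # absolute risk reduction = 0.05
rrr = arr / cer                             # relative risk reduction ~= 0.33
nnt = 1 / arr                               # number needed to treat = 20

# Approximate 95% confidence interval for the ARR (normal approximation)
se = math.sqrt(cer * (1 - cer) / n_control + eer * (1 - eer) / n_treated)
ci_low, ci_high = arr - 1.96 * se, arr + 1.96 * se

print(f"ARR {arr:.3f} (95% CI {ci_low:.3f} to {ci_high:.3f})")
print(f"RRR {rrr:.2f}, NNT {nnt:.0f}")
```

Here the interval excludes zero, so at this sample size the trial gives a reasonably precise estimate of benefit; a wider interval would signal a less precise estimate.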

Papers on Diagnosis

There are three areas we need to explore when making a diagnosis (a short worked example tying them together follows this list):

  • Probability and Odds
  • Bayesian Analysis
  • Likelihood Ratios
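The sketch below connects the three ideas using assumed numbers (a 20% pre-test probability and a positive likelihood ratio of 8 are illustrative, not taken from any source): convert the pre-test probability to odds, apply Bayes’ theorem in its odds form by multiplying by the likelihood ratio, and convert back to a post-test probability.

```python
# Assumed numbers for illustration: 20% pre-test probability, LR+ of 8
pretest_prob = 0.20
lr_positive = 8.0

# 1. Probability -> odds
pretest_odds = pretest_prob / (1 - pretest_prob)      # 0.25

# 2. Bayes' theorem in odds form: post-test odds = pre-test odds x LR
posttest_odds = pretest_odds * lr_positive            # 2.0

# 3. Odds -> probability
posttest_prob = posttest_odds / (1 + posttest_odds)   # ~= 0.67

print(f"Pre-test probability:  {pretest_prob:.0%}")
print(f"Post-test probability: {posttest_prob:.0%}")
```

The same calculation with a negative likelihood ratio (LR−) below 1 would lower the post-test probability instead of raising it.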

Systematic Reviews and Meta-Analyses

This time we will use the Sackett rubric for analyzing systematic reviews. We haven’t covered systematic reviews before, so please watch this video before coming to the session.