Recommended reading from Anne Taylor-Vaisey: Today's issue of JAMA contains a letter and a reply concerning the following article:
Fordis M, King JE, Ballantyne CM, Jones PH, Schneider KH, Spann SJ, Greenberg SB, Greisinger AJ. Comparison of the instructional efficacy of Internet-based CME with live interactive CME workshops: a randomized controlled trial. JAMA 2005 Sep 7;294(9):1043-51.
The study of Internet-based continuing medical education (CME) by Dr Fordis and colleagues incorporated principles of effective instructional design in both interventions and assessed physician behaviors as an outcome in education research. Unfortunately, this article demonstrates 2 flaws commonly found in education studies, both of which lead to results that defy meaningful interpretation.
First, it is impossible to know whether the observed effect on drug prescribing is due to the live Web conference, the opportunity to revisit the Web site to reinforce learning, the ability to adjust viewing based on learning preferences, or simply the greater time invested in learning by the online group. Multifaceted educational interventions have been decried because the reader cannot determine which factors, alone or in combination, are responsible for the results.[2,3] Such evaluations cannot be generalized beyond the setting in which the study was conducted.
Second, media-comparative research--the comparison . . . [full text by subscription]
Dr Cook questions the utility of "multifaceted" studies and reports that such studies have been widely criticized. On the contrary, multifaceted educational approaches, as distinguished from the "multifactorial" studies cited by Cook,[2,3] were specifically incorporated because of their effectiveness. Multifaceted interventions in CME (ie, designs using 3 or more educational approaches) are associated with an increased likelihood of positive outcomes (79%) compared with 2-method (64%) and 1-method (60%) designs. As for distinguishing the effect unique to each contributing element (eg, reinforcement, viewing preference), Cook misreads the intentions of our study. While changes in knowledge and other nonbehavioral end points have been demonstrated, no previous controlled studies have shown changes in physician behavior using Web-based interventions. The intent, therefore, was not to distinguish among the educational elements, but to determine whether efficacy could be demonstrated at all. Only because efficacy was observed did we discuss at length . . . [full text by subscription]