CME 911: Ask the Right Post-Activity Test Questions

While often overlooked, good question writing for pre- and post-CME tests is key to measuring the education’s impact.

Management consultant Peter Drucker once famously said, “If you can’t measure it, you can’t manage it.” And while we don’t seek to “manage” our learners through our continuing education programs, we all spend considerable time measuring the impact our education has on their knowledge, competence, and performance. The most frequently used measurement tool in our arsenal is the pre-/post-activity test question. Yet it’s easy to overlook the care these questions demand and the skill involved in writing them well.

For this column, Erik Brady, PhD, CCMEP, director of analytics, reporting, and outcomes at Clinical Care Options, talks through the challenges of question development in CE and how best to use our limited opportunities to assess the quality of the education we provide.

From: Scott Kober
To: Erik Brady; Cathy Pagano
I routinely review early drafts of content for our certified activities. Last week, a draft of an online case study, written based on input from three faculty experts, filtered into my inbox.

The case itself was excellent—it was well designed, covered all of the areas we had identified in our initial gap analysis process, and included some interesting dialogue among faculty members. There was just one problem—the pre-, intra-, and post-activity questions stunk.

I struck one down because it was too easy, another because the question was never actually answered in the activity itself, a third because the logic behind the question was flawed, and a fourth because it focused on an unimportant data point. Several others required minor rephrasing or changing one of the multiple-choice response choices. I don’t think I left a single one untouched.

Now I recognize that there is a skill to question-writing and that individuals who excel at developing comprehensive, interesting, evidence-based content are often challenged when they need to develop appropriately worded and focused questions for an activity. But we are an industry that centers its outcomes, to a large extent, on our ability to test learners’ absorption of our education, and too little attention is paid to the tool we use to generate the data—i.e., the questions themselves.

So Erik, my questions to you are these:

1.    What sort of guidance do you provide for individuals who are tasked with the initial development of activity questions?

2.    How easy or difficult should we be making our knowledge- or competence-based questions? Is it our job to actually challenge learners before allowing them to obtain credits for completing an activity or should we let them off the hook with some “gimmes?” 

From: Erik
To: Cathy; Scott
First off, let me say that I believe this to be a very typical situation. That said, there’s a critical question that should be asked very early in the content development process: “What do we expect to see the learners do differently as a result of participating in this activity?” I’m a big believer in structuring your content kickoff around that universal question.

Asking that question of your content development team, which includes your faculty, can help bring two things into focus: first, the learning objectives, and second, the outcomes questions. If everyone who has a hand in the development of the activity understands what change we are trying to effect with a given activity, then it becomes easy to lock down the learning objectives and sharpen the content.

As to your second question, our goal with outcomes assessments shouldn’t be to try to trick the learner. We should strive for an accurate measure of the impact of participating in the activity.  

“Easy” or “difficult” is a subjective issue. Sometimes we miss the mark with an outcomes question and more than 80 percent of learners select the optimal response at baseline. In such a case, it’s likely a reflection of a “miss” in how the learning need was characterized.

From: Scott
To: Erik; Cathy
So, the 80 percent figure you noted as a trigger for a question that is likely too easy or tied to a misaligned learning objective: is that based on any sort of industry standard, or was it just an intuitive figure you picked? Personally, when I am reviewing results from pre-activity test questions, I look for a correct response rate somewhere in the range of 30 percent to 60 percent, with a spread of responses across multiple options. To me, that shows that the question was appropriately challenging and that our suspicion of a gap in knowledge or competence in this area was correct.

From: Erik
To: Cathy; Scott
The 80 percent number is arbitrary. I like your explanation. A baseline optimal-response rate somewhere in the range of 30 percent to 60 percent, with a spread across the other answer choices, likely indicates a well-balanced and appropriately designed question item.
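To make that heuristic concrete, here is a minimal sketch in Python (a hypothetical function; the 30 to 60 percent range and the 5 percent minimum per distractor are assumptions drawn from this discussion, not an industry standard) that flags baseline question results that look too easy, too hard, or that have distractors nobody chooses:

```python
# Minimal sketch: flag pre-activity (baseline) question items whose response
# pattern suggests the item is too easy, too hard, or has non-functioning
# distractors. Thresholds are illustrative assumptions, not industry standards.

def review_baseline_item(responses: dict[str, int], correct: str) -> list[str]:
    """responses maps each answer choice to its baseline response count."""
    total = sum(responses.values())
    if total == 0:
        return ["no baseline responses collected"]

    flags = []
    correct_rate = responses.get(correct, 0) / total

    if correct_rate > 0.60:
        flags.append(f"{correct_rate:.0%} chose the optimal response at baseline: "
                     "the item may be too easy or the learning need mischaracterized")
    elif correct_rate < 0.30:
        flags.append(f"only {correct_rate:.0%} chose the optimal response at baseline: "
                     "check the item for ambiguity or flawed logic")

    # A well-balanced item spreads the remaining responses across the distractors.
    for choice, count in responses.items():
        if choice != correct and count / total < 0.05:
            flags.append(f"distractor '{choice}' drew almost no responses ({count}/{total})")

    return flags


# Example: 45 percent chose the optimal answer (B), with a spread across distractors.
print(review_baseline_item({"A": 30, "B": 45, "C": 15, "D": 10}, correct="B"))  # prints []
```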

From: Cathy
To: Scott; Erik
I’m right on board with you both. Post-activity tests should always align with the goals of our programs so that the data becomes valuable for our industry as a whole in terms of demonstrating the value of CME for all stakeholders. However, post-activity question-writing is an art. I have seen some doozies in my years of experience (of course, none that I wrote!).

As you have both alluded to, just asking learners if they can recite facts from the material is not good enough. We have to make sure they can appropriately use their newfound knowledge. Consequently, if we do an exemplary job of designing and disseminating the education, all the questions after the activity should be “easy,” right?

From: Erik
To: Cathy; Scott
For the engaged learner, I would agree that selecting the optimal answer on the post-activity test should be fairly straightforward, although I don’t think a comparatively “low” post-test score necessarily indicates that we’ve done a poor job with our education. A lower-than-ideal post-activity test score may simply indicate that this set of learners needs additional reinforcement on the topic. It’s also quite possible that the consensus approach the faculty believes should be adopted is, for some reason, not yet ready for broad incorporation into community practice.

One way to test a question item for ambiguity is to validate it by piloting the question with a small subset of learners and asking them to critique it. If your test group indicates that the item is unclear, then you have some basis for going back to your question author and requesting an edit.

From: Cathy
To: Scott; Erik
You’re so right. There’s a huge difference between an engaged learner and one who is just going through the motions to get credit!

From: Scott
To: Erik; Cathy
I am sure we have all seen plenty of “engaged” learners who doze off at large satellite events but confidently fill out their request for credit forms at the end. They are always my favorites.

Scott Kober, MBA, CCMEP, is director of content development, and Cathy Pagano, CCMEP, is president of the Institute for Continuing Healthcare Education in Philadelphia. Erik Brady, PhD, CCMEP, is director of analytics, reporting, and outcomes at Clinical Care Options.

 
