
#CMEChat 8/10: Evaluation and Outcomes Fatigue

Outcomes fatigue, where healthcare professionals just run their pen down the form rather than provide thoughtful, meaningful answers to post-activity evaluations, is all too common. On August 10, continuing medical education professionals got together at the #CMEChat hashtag on Twitter to hash out some better approaches to measuring learning.

Led by @meducate (aka, Medical Meetings columnist Lawrence Sherman), the group began by agreeing that writing good questions is both art and science—you need to ask the right questions in the right way, and you need to ensure that the structure is in place to evoke high-quality answers. And you have to ask the right number of questions—enough to get meaningful results, but not so many that participants throw down their pens halfway through. One person suggested that providers approach the evaluation from the learners' perspectives—“why are they asking me this and what do they want to know?” Another suggestion was to “start with the end in mind.”

Another part of the problem is that it can be hard to craft effective outcomes questions if you don’t have an in-depth knowledge of the therapeutic area, as may be the case with a medical writer. But involving an expert can be time-consuming, cumbersome, and expensive, as one person pointed out.

It also doesn’t help that some providers tend to have a two-part approach: Part 1 being to ask questions that fulfill the Accreditation Council for CME’s requirements; and Part 2 being to measure what’s actually been learned.

There also was agreement that more open-ended questions can be valuable.

Standardize or Not?

The CMEChatters talked about the possible benefits of using a set number and/or type of questions. As one tweeter said, “Formulaic writing gives me the creeps,” but it does make it easier to compare data across programs when you can compare apples to apples. One person suggested that evaluation tools should at least be validated, or perhaps even standardized.

Another said a standardized evaluation tool is a must if you want to compare outcomes across programs. But as the conversation moved back and forth between outcomes and evaluations, one CMEChatter warned that everyone should remember that “outcomes is not the same as evaluations”—while they’re not mutually exclusive, they are different. As another said, “Evaluation is a process, outcomes is a science.”

Validation and Value

They also talked about the limits of self-reported intent to change when that change is never actually validated. But validation is another big can of worms. Some docs don’t want to provide answers that can be tracked back to them, or to be put under the microscope by CME providers, the CMEChatters said.

Fatigue or Indifference?

What is it that makes so many evaluations so distasteful for learners? Are they fatigued, or just indifferent? Have we created an environment where learners simply check boxes without thinking through each evaluation question?

As one person said, “Um, yes.”

So where does that indifference come from?

There are some “so-called learners” who aren’t really there to learn. They’re just there for the free meal or credits. If they don’t care about the learning, why should they care about the evaluation? So, as one person quipped, “How do we turn the munch bunch into learners?” While several people admonished that learners deserve more respect than this exchange implied, others said it’s important to be realistic—not everyone is there to learn.

These folks should read some of the data on the clinical impact of CME.

Creating Understanding of Why Answering These Questions Matters

So, what would make an educational activity so exciting that learners want to be there, and want to fill out the form afterward?

The problem is, learners view it as jumping through hoops to get their credit; they don’t realize why providers need their information, said one CMEChatter. CME providers need to involve learners in the educational process, and they need to design the educational intervention so that people actually want the education. Find ways to create a positive, creative, safe learning environment, said another. “So much depends on learning environment, [the] expectations set by leaders and teachers in the room.”

“If evaluation is important to you, why not make it important for persons filling it (not by force, [but by] seduction).” The provider community can show learners that there are reasons why they ask these questions, and that the answers have value. The key word, one person said, is environment, and the key concept is trust. “Do learners trust teachers to use data appropriately?”

Short of paying them, as one chatter suggested tongue in cheek, you can encourage them to reflect on what they learned through the evaluation form, so filling out the form becomes another way to cement what they learned. The only problem with reflection is when the educational activity isn’t significant enough (in a CME context) to arouse deep reflection.

One way to help overcome learners’ reluctance to participate in outcomes validation measures would be to share aggregate outcomes with them, so they could see the value of responding to those questions, said one tweeter. “If you’re not making use of the results (formatively for yourself) in evaluations, you’ll get fatigue,” added another. “Feedback to the learners is critical, but often omitted,” a CMEChatter agreed.

Or you could just get rid of the form altogether and follow the lead of one of the CMEChatters, who uses coaching to monitor the learning process.

Post-Activity Evaluation Practice Pearls

Here are a few of the pearls of wisdom the CMEChatters imparted at the end of the chat:

• Give the learners a framework to design their own CME.

• Don't leave the form to the last minute—plan it when you plan the intervention.

• Include open questions so that you're not eating your own dog food all the time.

• Instead of focusing so much on assessment, maybe the focus should be placed more on developing the learning environment.

Don't miss the next #CMEChat, coming to your desktop Wednesday, 8/17, at 11 am Eastern.
