
Alliance meeting day 2: Practical strategies for better outcomes

I missed the beginning of this one, but what I caught of the rest was terrific. Carol Havens, MD, and Philip Bellman, MPH, both with Kaiser Permanente Northern California, went through why good outcomes begin with good needs assessment (if you don't know where you're going, how do you know when you've arrived?), and the need to link objectives to outcomes, use multiple interventions, measure outcomes in multiple ways over time, and use outcomes to identify future needs. CME can help "move the big dots," they said, including quality, functional outcomes, mortality rates, patient safety, and a host of other things that everyone has their eyes on when it comes to improving patient population health. Outcome measurement not only leads to better education, demonstrates the value of the CME office, and helps determine future education; practically speaking, it's also mandated by the ACCME, and positive outcomes can lead to exemplary accreditation.


Level 1 outcomes, what they called the "smile sheet," just rate the quality, usefulness, objectives, presentation, faculty, and coffee at the activity. They don't provide much in the way of meaningful outcomes, though. They gave an example of a monthly one-hour videoconference where quality and usefulness ratings were pretty consistently in the 90 percent to 95 percent range. But when they measured Level 2 outcomes (changes in knowledge, attitudes, or skills) for the same program, using pre- and post-tests, skill observation, and commitment-to-change agreements, 48 percent to 74 percent said they either will change or are considering changing, while 15 percent to 31 percent said they already practice this, and 3 percent to 23 percent said it didn't apply to what they do. They also asked those who said they were going to change to list two things they intend to do differently as a result of the program.
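
Just to make the arithmetic concrete, here's a minimal sketch (my own, in Python, with made-up category names and sample counts, not anything the presenters showed) of tallying commitment-to-change responses into those buckets:

```python
from collections import Counter

# Hypothetical response codes for a commitment-to-change question; the real
# categories would come from your own evaluation form.
CATEGORIES = ("will_or_considering_change", "already_practice_this", "not_applicable")

def tally_level2(responses):
    """Return the percentage of respondents in each commitment-to-change bucket."""
    counts = Counter(responses)
    total = sum(counts[c] for c in CATEGORIES)
    if total == 0:
        return {c: 0.0 for c in CATEGORIES}
    return {c: round(100 * counts[c] / total, 1) for c in CATEGORIES}

# Made-up example: 20 responses from one month's videoconference
sample = (["will_or_considering_change"] * 12
          + ["already_practice_this"] * 5
          + ["not_applicable"] * 3)
print(tally_level2(sample))
# -> {'will_or_considering_change': 60.0, 'already_practice_this': 25.0, 'not_applicable': 15.0}
```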


For level 1, you'll usually get a 3.5 on a 4-point scale, and a 4.5 on a 5-point scale, said Bellman. "I can pretty much predict exactly the results you'll get before the activity even takes place." So, that's not very useful. While level 1 "usefulness" ratings are somewhat useful, they aren't predictive of what people will actually use. The level 2 question that measures their intent to change, though, is both useful and predictive, since intent to change correlates highly with actual change, Bellman said.


Level 3 (self-reported behavior change) involves a follow-up assessment of implemented practice change. This captures both intended and unintended consequences of CME, and can document the impact of CME on practice behavior, though it does tend to be subjective since it's self-reported. "We ask them if they changed something, and what they changed," said Bellman. This can be done via Web or mailed surveys, or phone interviews. (More on increasing response rates later.)


For one CME program on managing obesity, 54 percent had said they intended to make a change in their practice (level 2). After one month, 45 percent said they measured BMI "more frequently."


Level 4 kicks it up a notch by objectively measuring change in practice using quality measures, utilization measures, HEDIS, JCAHO, and NCQA measures, screening and diagnostic rates, patient satisfaction measures, community public health data, and any other objective measures you can get your hands on. While level 4 measures can help assess needs and chart post-activity progress, they may not capture the breadth or complexity of new behaviors, and it can be hard to pick out individual data from that of a large practice group. Level 4 can measure things like increased chlamydia screening and appropriate prescribing of asthma medication, for example.


In an in-person and Web-based CME intervention to improve patient-provider communication and increase the use of electronic patient care records, the presenters found that 80 percent said they would increase their use of domestic violence (DV) screening tools on the level 2 measure, and DV diagnoses rose 5.4 percent after three months.


The esoteric and elusive level 5 outcome measure, which objectively measures change in treatment outcomes or population health status, tracks the net effect of practice change on patients and target populations. It can also be hard to measure, or be obscured by co-morbidity or other factors, the presenters said. Using sources like morbidity and mortality rates, incidence of secondary complications, and hospitalization and re-hospitalization rates, level 5 can measure things like a decreased risk of cardiac death, increased survival of HIV patients, and a decrease in smoking rates. You can use level 5 outcomes to summarize change for key stakeholders, examine the intended and unintended impacts of the activities, and use the data to assess the need for future activities. "Don't just document and put it in the CME files," they warned. "Find out who the outcome data is important to, and let them know what the data is."


Levels 1 and 2 are immediate measures, level 3 follows in one to three months, and levels 4 and 5 measure impact six to 12 months later, they said.
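
If it helps to see that timing laid out, here's a small sketch (mine, in Python; the specific day offsets are just placeholder assumptions within the windows they mentioned) for scheduling follow-up measurements after an activity:

```python
from datetime import date, timedelta

# Rough follow-up windows per outcome level, based on the timing the presenters
# described (levels 1-2 immediate, level 3 at one to three months, levels 4-5
# at six to 12 months). The specific day offsets are placeholder assumptions.
FOLLOW_UP_DAYS = {
    1: 0,     # smile sheet, collected at the activity
    2: 0,     # pre/post-test and commitment to change, also immediate
    3: 90,    # self-reported behavior change survey
    4: 180,   # objective practice measures (HEDIS, utilization, etc.)
    5: 365,   # population health / treatment outcome data
}

def follow_up_schedule(activity_date):
    """Map each outcome level to the date its measurement should be run."""
    return {level: activity_date + timedelta(days=offset)
            for level, offset in FOLLOW_UP_DAYS.items()}

# Made-up example: an activity held March 1, 2024
for level, due in follow_up_schedule(date(2024, 3, 1)).items():
    print(f"Level {level}: measure by {due.isoformat()}")
```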


The presenters recognized that doing high-level measurements for all programs just isn't practical, or even possible, for most providers. "We'd like all our programs to be level three or above, but we do the best we can," they said. To find the time and resources for outcomes measurement, prioritize: use the higher-level measures on high-impact programs that address organizational needs. They even suggested that the way CME providers have been trying to prove their worth by holding more and more activities isn't working to their benefit. Do fewer programs, and make sure the ones you do are high impact, they suggested. "If you only do 10 a year, but they have a big impact on patient health, you'll prove your worth."


Barriers: A big one is that docs don't respond to surveys. The presenters had a few suggestions to increase response rates. While they do both electronic and paper surveys, they say not to rely on electronic because most docs still don't respond very well to them. Don't survey every program, either, just the ones that matter, so they don't get survey fatigue. Also, when they come to the next activity and are a captive audience, ask what they did with the last month's information. Another tip: Ask your physicians to send the request for info, because other docs are more likely to respond to someone they know. Keep the survey short, too. "It shouldn't take more than two to five minutes," and test it on colleagues first to make sure the wording is clear. Publicize the data, too, so they know what they're doing matters, which makes for an incentive to respond.


My favorite tip was the "lumpy envelope" one: Put a piece of candy in the envelope along with the survey; they'll open it just to see what the lump is. Once they open it, they're somewhat engaged with it, and you have a better shot at them filling it out.


One audience member pointed out that it's hard to link outcomes to a specific educational intervention when there are so many other factors that may have led to the behavior change. The presenters acknowledged that this is a problem, and that it's hard to know the impact of each intervention. But since what you really care about is improved patient outcomes, it doesn't really matter all that much. "If you can identify that an educational intervention is needed, you provide that intervention, and change happens, you can rightfully say you were at least part of the solution," said Havens.


I have to add that both the presenters were excellent, really lively and engaging, and I'm totally not doing them justice here.


I'm going to take a break and go walk around a bit, now that the sun's out. I'll try to post some more stuff in a bit. Again, please forgive my typos!
