This morning I read this post on Confessions of a Medical Educator about the trouble with trying to get meaningful information out of evaluation forms (and had to post this response to it on our Capsules blog). The problem is especially acute in continuing medical education, where providers are often required to gather meaningful data on what attendees learned and how they use it, but I can't imagine anyone involved in continuing adult education in any field doesn't share the concern (in fact, I dug through our archives and came up with this article from Financial & Insurance Meetings that offers some tips).
You may not be familiar with Moore's 7 Levels for CME Outcomes Measurement if you're not working with healthcare providers, but I think it could easily be adapted for other niches, and it's a good starting point for thinking about how to measure learning outcomes. May Dr. Moore forgive me if I end up mangling his pyramid, but here's my translation, with added commentary, for those who provide continuing adult ed outside the medical arena:
1. Participation--do they show up?
2. Satisfaction--is the room temperature comfortable? The food good? The lecture not too snooze-inducing?
3a. Declarative knowledge--can they tell you what it was that they were supposed to learn?
3b. Procedural knowledge--do they now know how to do what they learned?
4. Competence--if you gave them a test or asked them to demonstrate what they learned while still at the session, could they?
5. Performance--what do they now do differently back at the office as a result of what they learned?
6. Client benefit--what has changed for the better for their clients/customers/employees now that they've made these changes in how they work based on what they learned?
7. Community improvement--how has the community at large benefited from improvements in the client population?
Pretty daunting stuff, especially when you start getting into the outer circles of levels 5-7, which are difficult enough to measure in medical settings and may in fact be almost impossible for some other professions. Still, I've never been handed an evaluation that shot higher than level 2, and I think most adult education should be shooting for--and trying to measure results at--levels 3-4 at a minimum.
The always-brilliant Jeffrey Caufade has some ideas on how we can begin to get there, including making sure to ask these two questions and designing an activity that provides learning worth making a change for. Then there's this article I wrote a while back that, while aimed specifically at CME providers, has some ideas I think anyone could use to improve the value of the evaluation process. I know there must be a ton more on this topic that I'm just not coming up with off the top of my head (please post links in the comments!).
What do you do to ensure your evaluations are meaningful and not just "smile sheets"? Do you (and do you want to) measure the outcomes of the education you provide to see what people actually walk away with and if/how they change what they do because of it? Or are smile sheets good enough (and is that the reason we call attendees attendees and not learners)?