In my current role as a Knowledge Services Advisor with the Green Municipal Fund at the Federation of Canadian Municipalities, I keep getting questions about how to evaluate the effectiveness of our workshops, webinars and other learning events. So, in an effort to collect my thoughts, here goes....
Some Current Models
The most widely recognized learning evaluation framework was developed by Kirkpatrick (Reaction, Learning, Behaviour and Results); Phillips later added "Return on Investment" as a fifth level to measure efficiency.
When I worked at Global Learning Partners, we taught a great course called Learning Evaluation by Design, based on How Do They Know They Know?, the book by GLP's founder, Dr. Jane Vella, and her colleagues Jim Burrow and Paula Berardinelli. Their work grafted what was essentially a results chain onto the Principles and Practices of Dialogue Education to develop a simple evaluation framework to assess:
- Learning: how well the learners met the Achievement-Based Objectives during the workshop;
- Transfer: how the learners apply their learning in their life, work or community; and
- Impact: the change in their organization and/or community.
For each level, they would then think through evidence of the change they would look for -- either qualitative or quantitative -- and develop a plan for collecting this data. This elegant approach mirrored a lot of what I had learned about using Results-Based Management (RBM) in community development work.
However, neither of these approaches focuses enough on assessing how the training itself was researched, designed and facilitated. Instead, they look primarily "downstream" at the results. But if the original workshop does not take into account the learners' needs, context and prior experience, or if the design does not follow sound adult learning principles, or if the facilitators are not skilled at supporting the learning process, there is little point in looking for downstream evidence of learning in the workplace or community.