Evaluating Learning: A Seven-Level Framework

In my current role as a Knowledge Services Advisor with the Green Municipal Fund at the Federation of Canadian Municipalities, I keep getting questions about how to evaluate the effectiveness of our workshops, webinars and other learning events. So, in an effort to collect my thoughts, here goes....

Some Current Models

The most widely recognized learning evaluation framework was developed by Kirkpatrick (Reaction, Learning, Behaviour and Results); Phillips later added "Return on Investment" as a fifth level to measure efficiency.

When I worked at Global Learning Partners, we taught a great course called Learning Evaluation by Design that was based on the book How Do They Know They Know? by GLP's founder, Dr. Jane Vella, and her colleagues Jim Burrow and Paula Berardinelli. Their work grafted what was essentially a results chain onto the Principles and Practices of Dialogue Education to develop a simple evaluation framework that assesses:

  • Learning: how well the learners met the Achievement-Based Objectives during the workshop;
  • Transfer: how the learners apply their learning in their life, work or community; and
  • Impact: the change in their organization and/or community.

For each level, they would then think through evidence of the change they would look for -- either qualitative or quantitative -- and develop a plan for collecting this data. This elegant approach mirrored a lot of what I had learned about using Results-Based Management (RBM) in community development work.
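To make this concrete, here is a minimal sketch (in Python, purely illustrative) of what such an evidence-and-collection plan might look like as a simple data structure. The level names come from the framework above; the indicators and collection methods are invented examples, not prescribed ones.

```python
from dataclasses import dataclass

@dataclass
class EvidencePlan:
    level: str               # "Learning", "Transfer" or "Impact"
    indicator: str           # the change we expect to see
    evidence_type: str       # "qualitative" or "quantitative"
    collection_method: str   # how and when we plan to gather it

evaluation_plan = [
    EvidencePlan("Learning",
                 "Participants draft an outreach plan during the workshop",
                 "qualitative",
                 "Review the learning-task products on the final day"),
    EvidencePlan("Transfer",
                 "Participants run at least one outreach activity at work",
                 "quantitative",
                 "Follow-up survey three months after the event"),
    EvidencePlan("Impact",
                 "Their organization reports broader community participation",
                 "qualitative",
                 "Key-informant interviews at six months"),
]

for item in evaluation_plan:
    print(f"{item.level}: {item.indicator} -> {item.collection_method}")
```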

However, neither of these approaches focuses enough on assessing how the training itself was researched, designed and facilitated; they look primarily "downstream" at results. But if the original workshop does not take the learners' needs, context and prior experience into account, if the design does not follow sound adult learning principles, or if the facilitators are not skilled at supporting the learning process, there is little point in looking for downstream evidence of learning in the workplace or community.

Proposing a New Framework

To address these challenges, I would like to propose a new learning evaluation framework that draws on the insights of the Dialogue Education approach to researching, designing and facilitating learning, as well as the downstream results that Kirkpatrick, Phillips and others have suggested we measure.

It involves looking at seven levels of the learning process.

1. Learning Needs & Resources Assessment

A Learning Needs and Resources Assessment (LNRA) is simply the research you do before a workshop, webinar, conference or other learning event to inform its design and facilitation. It helps you identify not only what the participants are looking to learn, but also what resources they bring that will help them learn from each other (in dialogue).

  • Ask if (!) and how the organizers conducted an LNRA, either before they designed the event and/or to customize it to this particular group of learners. What did they:
      • Ask the participants and other stakeholders about their background, current context, work, successes, challenges, questions and opinions?
      • Study to better understand these issues (e.g., reports, evaluations, websites)?
      • Observe about the learners, their situation, organization or community?
  • Discuss how this information changed their decisions about the Content, Achievement-Based Objectives, Process or other design parameters.
  • Ask what else they might have found out that would have helped the design and/or facilitation.
2. Design

"Design" refers to how the event organizers laid out the content, learning tasks and other experiences to guide the participants through the course. It should state the underlying assumptions, define the parameters that are fixed (e.g. number of participants, location) and name the other decisions that the designers made during the process. To evaluate the quality of a learning design, you may want to:

  • Document the expected Theory of Change of the training program to understand how this event fits into the larger change process it is intended to support.
  • Review the design parameters as outlined in the Steps of Design:
      • People: the participants, the facilitators, Subject Matter Experts, and/or sponsoring organizations.
      • Purpose: What is the situation that calls for this learning? Why are the participants taking part?
      • Transfer & Impact Objectives
      • Date and Time for the learning event, as well as the total learning time once breaks are subtracted.
      • Place, Venue and/or Platform (if online)
      • Content: Knowledge, Skills and Attitudes
      • Achievement-Based Objectives (ABOs), which define what the participants will have done with each element of content by the end of the learning event.
      • Process: the Learning Tasks that help the participants meet the ABOs.

Learn more about the Steps of Design via this video.

Check for congruence between these parameters; if you like, map them on a Learning Design Canvas to summarize the key decisions. (A rough sketch of one way to check congruence follows the list below.)

  • Note which parameters were given to them, which ones they made decisions about, and why.
  • Review the Achievement-Based Objectives to assess whether they call for the level of learning needed to support the desired Transfer Objectives.
  • Review the Process to see how the training design embodies selected principles and practices of how adults learn most effectively.
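As a rough illustration of what "checking for congruence" could mean in practice, here is a small Python sketch. The field names and rules are assumptions made for this example, not part of the Steps of Design themselves: it simply checks that every content item has an ABO, every ABO has at least one learning task, and the tasks roughly fit the available time.

```python
from dataclasses import dataclass

@dataclass
class LearningDesign:
    content: list                # Knowledge, Skills and Attitudes to cover
    abos: dict                   # content item -> Achievement-Based Objective
    learning_tasks: dict         # ABO -> list of learning tasks supporting it
    total_minutes: int           # learning time available after breaks
    minutes_per_task: int = 20   # rough planning assumption

def congruence_issues(design):
    """Return a list of mismatches between the design parameters."""
    issues = []
    for item in design.content:
        if item not in design.abos:
            issues.append(f"Content '{item}' has no Achievement-Based Objective")
    for abo in design.abos.values():
        if not design.learning_tasks.get(abo):
            issues.append(f"ABO '{abo}' has no learning task to support it")
    task_count = sum(len(tasks) for tasks in design.learning_tasks.values())
    if task_count * design.minutes_per_task > design.total_minutes:
        issues.append("More learning tasks than the available time allows")
    return issues

design = LearningDesign(
    content=["Asset mapping", "Volunteer recruitment"],
    abos={"Asset mapping": "Practised mapping one neighbourhood's assets"},
    learning_tasks={"Practised mapping one neighbourhood's assets": ["Task 3A"]},
    total_minutes=180,
)
print(congruence_issues(design))  # flags the content item without an ABO
```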
3. Facilitation
  • Observe the facilitation to see how it embodies good adult learning principles and practices, and note where it might be improved.
  • Debrief the facilitator’s own impressions of the learning process.
  • Check the design parameters vs. the actual situation (e.g. start times vs. schedule, participation rates, technology platforms) and any impromptu changes that the facilitation team may have made in light of changing circumstances or emergent learning needs.
  • Consider any other factors that influence learning during the workshop or teleconference (e.g. group dynamics, technical problems).
4. Learning
  • Observe any mid-course opportunities for personal and group synthesis during the workshop; these let you assess learning in progress and make changes in response to emerging learning needs.
  • Assess how well participants met each Achievement-Based Objective (ABO) by reviewing the products of the learning tasks.
  • Hold a short post-workshop quiz on the content, if appropriate (a minimal scoring sketch follows this list).
  • Invite the participants to name their most significant learning, their progress compared to where they started before the workshop, and their transfer objectives (e.g. what they will apply to their work afterwards, and how they will assess this).
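As referenced in the list above, here is a minimal sketch of scoring that optional quiz, assuming a comparable baseline was gathered before the event (for example as part of the LNRA). All names and numbers are invented for illustration.

```python
from statistics import mean

# Invented scores out of 10; a real baseline might come from the LNRA.
pre_scores = {"participant_1": 4, "participant_2": 6, "participant_3": 5}
post_scores = {"participant_1": 8, "participant_2": 7, "participant_3": 9}

gains = {name: post_scores[name] - pre_scores[name] for name in pre_scores}
improved = sum(1 for gain in gains.values() if gain > 0)

print(f"Average gain: {mean(gains.values()):+.1f} points")
print(f"Participants who improved: {improved} of {len(gains)}")
```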
5. Feedback

In an online survey, email or interview, invite the participants to reflect on their experience of the learning event, possibly including the items below (a small sketch of tallying the closed-ended ratings follows the list):

  • Publicity: how did they find out about the event?
  • Registration: was it easy to sign up for the event?
  • Pre-workshop LNRA: did they take part in it? Did they see how the facilitators used this information?
  • Design: what did they think about the design of the workshop?
  • Facilitation: how did the facilitators do in guiding them through the design?
  • How would they rate their own participation in the event?
  • How would they rate the other participants' contributions to the learning?
  • What was their most significant learning, experience or insight ("a-ha!")?
  • What new questions do they now have? Every good learning experience generates as many questions as it does answers.
  • What challenges do they anticipate in applying their learning in their life, workplace and/or community?
  • What support would they like in applying their learning afterwards? Coaching? Resources? Peer support?
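As noted above, the closed-ended parts of such a survey are straightforward to summarize. Here is a small sketch; the question names and the 1-5 scale are assumptions made for this example, not a prescribed instrument.

```python
from collections import defaultdict
from statistics import mean

# Invented responses on a 1-5 scale.
responses = [
    {"design": 4, "facilitation": 5, "own_participation": 3},
    {"design": 5, "facilitation": 4, "own_participation": 4},
    {"design": 3, "facilitation": 4, "own_participation": 5},
]

ratings = defaultdict(list)
for response in responses:
    for question, score in response.items():
        ratings[question].append(score)

for question, scores in ratings.items():
    print(f"{question}: mean {mean(scores):.1f} (n={len(scores)})")
```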
6. Transfer
  • Look for evidence that they have applied their new knowledge, skills and/or attitudes (KSAs) in their work situation through observation, reviewing plans, reading reports, etc.
  • Conduct interviews and/or focus groups to ask for their feedback on the utility of the training now that they are applying what they have learned:
      • What has proven useful?
      • What has been less helpful?
      • What else do they wish they had learned?
  • Analyze the contextual factors that have influenced how they have applied their learning. These could include positive forces (e.g., supplemental information, peer support, time since the training) or negative forces (e.g., workload, resistance from the boss, inflexible systems).
  • Suggest tools and processes to reinforce their learning and enhance transfer.
7. Impact
  • Look for changes in Behaviour and/or Conditions that suggest the transfer (application) of the new KSAs made a difference for the participant, organization, peers (e.g. volunteers) and/or community (e.g. improved volunteer recruitment, retention and satisfaction).
  • Analyze any contextual factors that influenced impact, both positive and negative (e.g. supporting or hindering conditions in the community or workplace, other actors).
  • Review the Theory of Change in light of the evidence and suggest revisions that better reflect how change actually occurs.

