Assessing the effectiveness of your online training - what to do next
In this final part, we’ll tie up some loose ends by providing ideas about what, specifically, you might choose to change once you’ve evaluated your programme.
The main aim of evaluation is not to judge or criticise, or even to validate a brilliant learning programme. It is to look for improvements.
Improvement involves setting realistic and relevant goals. Realistic, because perfection is unachievable and it is more practical to make a series of small adjustments whose effects are easy to measure; relevant, because the goals should be closely aligned with the organisation's own objectives.
The classic formula for goal setting is the acronym SMART, which suggests that goals should be Specific, Measurable, Achievable, Relevant and Time-bound.
Every organisation is different, and the goals you choose will be unique to your own programme. The following are suggestions designed simply to provide ideas:
- Improve engagement by adding interactivity, video and graphics, or gamify your content by encouraging participants to compete against themselves or within groups
- Track learning more effectively by paying attention to chosen metrics such as page views, learner persistence or test results
- Increase uptake by publicising your training, making it available in more languages or opening it up to more of your organisation
- Research training and learning needs in greater depth, then adapt your content accordingly
- Make your programme more social by allowing interaction and discussion between participants or between learners and tutors
- Review unpopular content, then set a target to raise learner approval rates by a given percentage
- Make content more findable, allowing participants the freedom to learn in their own way rather than following a specific path
- Learners often go off-piste and dig out useful additional content from the web, so build in a way for them to share resources such as e-books, videos or podcasts
The download here is a template in Word format for a post-course survey in two parts: the first part is a series of statements which respondents can agree or disagree with on a 5-point Likert scale. The second part consists of open-ended questions allowing respondents to give their opinion in greater detail. Feel free to use and adapt as needed.
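If you collect the Likert-scale responses electronically, summarising them is straightforward. The sketch below is one possible approach in Python; the statements and scores are entirely hypothetical, and the 4-or-5 threshold for "approval" is an assumption you may want to adjust.

```python
# Minimal sketch: summarising 5-point Likert responses from a post-course survey.
# 1 = strongly disagree ... 5 = strongly agree. All data below is hypothetical.
from statistics import mean

responses = {
    "The course objectives were clear": [5, 4, 4, 3, 5],
    "The content was relevant to my role": [2, 3, 2, 4, 3],
}

def summarise(responses):
    """Return the mean score and approval rate (share of 4s and 5s) per statement."""
    summary = {}
    for statement, scores in responses.items():
        approvals = sum(1 for s in scores if s >= 4)  # count "agree"/"strongly agree"
        summary[statement] = {
            "mean": round(mean(scores), 2),
            "approval_rate": round(approvals / len(scores), 2),
        }
    return summary

for statement, stats in summarise(responses).items():
    print(f"{statement}: mean {stats['mean']}, approval {stats['approval_rate']:.0%}")
```

Tracking approval rates this way also gives you a concrete baseline for the kind of percentage-improvement target suggested above.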
Other evaluation models
We’ve focused on Kirkpatrick’s four levels of evaluation in this series, but there is a good deal of alternative research. Professor Keithia Wilson of Griffith University has a useful list of other options:
- Jack Phillips' Five Level ROI Model
- Daniel Stufflebeam's CIPP Model (Context, Input, Process, Product)
- Robert Stake's Responsive Evaluation Model
- Robert Stake's Congruence-Contingency Model
- Kaufman's Five Levels of Evaluation
- CIRO (Context, Input, Reaction, Outcome)
- PERT (Program Evaluation and Review Technique)
- Alkin's UCLA Model
- Michael Scriven's Goal-Free Evaluation Approach
- Provus's Discrepancy Model
- Eisner's Connoisseurship Evaluation Model
- Illuminative Evaluation Model
- Portraiture Model