The information in this section of Guidance for Educators outlines some of the basic considerations for colleagues planning, conducting or reviewing educational evaluations. There are links to more detailed information in this large area of educational practice.
Being clear about the purpose of an education evaluation is a fairly obvious first consideration. This will involve thinking about why the evaluation is needed and how the evaluation data will be used (if at all). The most common reasons for conducting an evaluation are to:
check the education project is working as intended (Did it work as expected?)
check project processes are implemented effectively (Is it being delivered properly?)
identify areas for improvement (What can we do better?)
assess value for money and/or continuing need for the programme (Was it worth it?)
satisfy a commissioner requirement (Did we do what was expected?).
If you are unable to use the data for one or more of the above purposes, is an evaluation needed?
Evaluation is integral to most educational initiatives, but can be time-consuming, complicated and costly. For this reason, you should identify at an early stage of project planning:
baseline information relating to project objectives,
the purpose(s) of the evaluation and the evaluation questions to be answered,
the best evaluation methods,
sources of data, and
any resources required for the evaluation.
Your plan should also identify who will lead the evaluation, who will collect the data, and any necessary governance arrangements (e.g. a project steering group or evaluation sub-group).
A key consideration for your evaluation is finding out about the effects, or impact, of the education project on services, health and social care practice, and staff learning. It is recommended that the evaluation should measure the following types of impact:
Service impact (improvements in clinical outcomes, better patient care, cost reductions, increased productivity, better staff satisfaction, etc.)
Performance impact (staff performing new procedure, staff being more productive, staff performing at a higher standard)
Educational impact (staff demonstrate better knowledge, skills, confidence, motivation, awareness)
Engagement impact (participants find the training relevant and important, and are satisfied with the training experience)
You are not expected to evaluate each type of impact for every educational initiative. The data you collect depends on factors such as the project cost and its importance to your organisation or the service.
NES has developed questions to measure the impact of technology on learning: see the NES Evaluation template for Learners, which you are welcome to review and reuse.
When measuring technology impact it is important to capture data from each of the five essential evaluation elements. The level of detail gathered by the questions within each section contextualises and provides the tailored and unique data relevant to your learning resource, programme or event. The data should indicate whether, where, for whom and in what way the technologies used enhanced the learning experience. This data will give educators sufficient information to review and improve learning resources, programmes and events as required. The order in which the elements appear in your evaluation is not fixed, but questions relating to every element should be covered within the evaluation.
NES has produced a short e-learning module on the subject of planning and measuring impact - What do you think about when you think about impact? This 10-minute learning bite explains some of the basic concepts in this area using worked examples and activities.
For further information on evaluating impact, guidance and advice is available from Rob Coward, Principal Educator, NHS Education for Scotland.