Review and improve

Reviewing your learning resources and making improvements.

Evaluating education and training

This section of Guidance for Educators outlines some basic considerations for colleagues planning, conducting or reviewing educational evaluations, with links to more detailed information in this large area of educational practice.

Purpose(s) of evaluation

The first, and fairly obvious, consideration is to be clear about the purpose of the evaluation: why is it needed, and how will the evaluation data be used (if at all)? The most common reasons for conducting an evaluation are to:

  • check the education project is working as expected (Did it work as expected?)

  • check project processes are implemented effectively (Is it being delivered properly?)

  • identify areas for improvement (What can we do better?)

  • assess value for money and/or continuing need for the programme (Was it worth it?)

  • satisfy a commissioner requirement (Did we do what was expected?).

If you are unable to use the data for one or more of the above purposes, is an evaluation needed?

Planning evaluation

Evaluation is integral to most educational initiatives, but can be time-consuming, complicated and costly. For this reason, you should identify at an early stage of project planning:

  • baseline information relating to project objectives,

  • the purpose(s) of the evaluation and the evaluation questions to be answered,

  • the best evaluation methods,

  • sources of data, and

  • any resources required for the evaluation.

Your plan should also identify who will lead the evaluation, who will collect the data and any necessary governance arrangements (e.g. project steering group/evaluation sub-group).

Impact evaluation

A key consideration for your evaluation is finding out about the effects, or impact, of the education project on services, health and social care practice, and staff learning. It is recommended that the evaluation should measure the following types of impact: 

  • Service impact (improvements in clinical outcomes, better patient care, cost reductions, greater productivity, improved staff satisfaction, etc.)

  • Performance impact (staff performing a new procedure, staff being more productive, staff performing to a higher standard)

  • Educational impact (staff demonstrate better knowledge, skills, confidence, motivation, awareness) 

  • Engagement impact (participants find the training relevant and important, and are satisfied with the training experience)

You are not expected to evaluate each type of impact for every educational initiative. The data you collect depends on factors such as the project cost and its importance to your organisation or the service.  

Evaluating learning with technology

NES has developed questions to measure the impact of technology on learning: the NES Evaluation Template for Learners. You are welcome to review and reuse it.

When measuring technology impact it is important to capture data from each of the five essential evaluation elements. The level of detail gathered by the questions within each section contextualises your learning resource, programme or event and provides tailored data unique to it. The data should indicate whether, where, for whom and in what way the technologies used enhanced the learning experience, giving educators sufficient information to review and improve learning resources, programmes and events as required. The order in which the elements appear in your evaluation is not fixed, but questions relating to every element should be covered.

  1. Detailed programme description - Before you distribute your evaluation, add the details of the technology or technologies used within your programme, learning resource or event (e.g. Teams, GoToWebinar, Moodle). This can include engagement technologies learners will encounter (e.g. breakout rooms, modules, polls, Q&A, whiteboards and liberating structures). Add the method by which the learning resource, programme or event was delivered (e.g. a blend of in-person and at-distance technologies, in real time or asynchronously, online or hybrid events).
  2. Participant/Learner information - The questions in this section gather information about the learners themselves, including job role, confidence and prior experience in using the technologies, whether digital skills were checked pre-programme, and location for learning. This level of detail ensures that the results are specific to your learning resources, programmes or events. It is also important to gather information on where the majority of learning was undertaken. This helps identify potential problems with connectivity in different settings, access to hardware, or supported learning time, all of which may have an impact on your programme's efficacy.
  3. Usability - This section captures specific data on accessibility of hardware and software, availability of IT support, reliability of connectivity, and learners' confidence in using the technologies. This data is needed to pinpoint issues or barriers to learning that can be resolved, and to improve accessibility for learners as required. The section also gathers information on whether or not the technology enhanced social learning (i.e. use of technology to support joint learning activities, social aspects of learning, and ongoing interaction and knowledge sharing beyond programme, module or learning event completion). It also includes a question on learners' perception of the tutor's skill in using the technology, as this could enhance or detract from the overall quality of the learner experience.
  4. Learning and Training Outcomes - This section gathers information on standard education evaluation elements. It is important because data is needed to measure the efficacy of the learning resource, programme or event content. You may also wish to ask about any assessments or checks included and how well they supported the learning outcomes. The section also covers the use of technology to support learning outcomes and to assist with peer-to-peer and peer-to-tutor communication and post-programme knowledge exchange, the detail of educational content, and assessment of subject knowledge before, during and after the programme.
  5. Satisfaction and Experience - This section gathers the learner's perception of the technology as a useful support and a good use of time and resources. It also gathers information on learners' enjoyment of using technology to support learning within the programme, module or learning event; enjoyment is evidenced as an important factor in enhancing learning. Open text and comment boxes are helpful additions, as learners offer further information on their opinion of the overall quality of the learning resource, programme or event, make suggestions for improvements, and comment on both the programme and the evaluation form.

Further Information

NES has produced a short e-learning module on the subject of planning and measuring impact - What do you think about when you think about impact? This 10-minute learning bite explains some of the basic concepts in this area using worked examples and activities.

Further guidance and advice on evaluating impact is available from Rob Coward, Principal Educator, NHS Education for Scotland.