NHSScotland blogs

A Human Factors Vision for Health and Social Care

Our NHSScotland developments for embedding Human Factors principles and methods in everyday work, professional practice and at all levels of education and training have been heavily informed by a recently published White Paper for healthcare prepared by the UK's Chartered Institute of Ergonomics and Human Factors (CIEHF).

Led by Professor Sue Hignett and Dr Alex Lang, the White Paper outlines a vision to: 

  • Broaden the scope of Human Factors understanding in health and social care
  • Guide understanding of shared aims and offerings from partner organisations
  • Promote the integration of Human Factors to optimise human (patients and staff) wellbeing and overall system performance
  • Raise awareness of the discipline as an accredited, professional career
  • Ensure and maintain the standard of Human Factors practice through demonstration of competence and experience
  • Encourage the contribution of professional (qualified) Ergonomists & Human Factors Specialists via consultation and employment
  • Champion an accessible, user-focused approach 

To read more, please download the White Paper here.

A safety checklist for general practice: help or hindrance?

In this short blog, Paul Bowie debates some of the challenges around the design, implementation and practical usefulness of checklists in healthcare generally, but with a particular focus on general medical practice.

“Checking” is endemic in healthcare the world over. It is a routine everyday activity in all care settings and is fundamental to maintaining patient safety. Checking tasks go well in the great majority of cases and do contribute to a successful outcome for the patient and staff. However, sometimes they do not, and this may lead to things going wrong that contribute to patient safety incidents – circumstances where patients could have been or were harmed. Unfortunately, some of these incidents can and do have a devastating impact on the wellbeing of patients and families, as well as the care professionals involved.

Minimising the risks of such occurrences is obviously a major safety priority. Even so, this can be very difficult to achieve given the sheer complexity of care systems and the everyday complexities and constraints that care teams often work within (eg high workloads, increasing patient demand, and limited resources). The skill and dedication of care professionals in constantly coping and adapting to these ever-changing circumstances convincingly explain why things normally go well for patients, but also why sometimes they do not (see, for example, the work of Erik Hollnagel: www.erikhollnagel.com).

The use of “checklists” is promoted as one approach to standardising working practices to make care systems more reliable and contribute to patient safety - the relationship between ‘reliability’ and ‘safety’ will be the subject of a forthcoming blog.

A checklist is a cognitive tool that may help care teams to ensure that safety-critical tasks (including communicating effectively with each other) are actually carried out. Perhaps the most well-known example in healthcare is the WHO Surgical Safety Checklist, which is now mandated in over 150 countries worldwide.

Barriers to checklist use
The safety checklist is often seen as a simple, practical solution – even a panacea – for making sure that highly important patient care tasks are actually performed, and performed on time. The evidence base, however, shows a different reality. Checklist adoption by staff groups, and subsequent impacts on enhancing patient safety, are decidedly mixed. In some ways this is no great surprise. In the risk management world, the “hierarchy of intervention effectiveness” rates interventions that depend on human behaviour, such as checklists, double checks and reminders, towards the bottom of its scale.

A range of reasons for this is now apparent. Prime among them is that the checklist can be viewed as an inadequate “technical fix” to what is a “complex socio-cultural problem”. In other words, individual “checking” behaviours and intentions are influenced by the “social group” that people belong to, as well as by local healthcare practices, values, beliefs and traditions. Adoption is therefore often dependent on how seriously the issue of “checking” is taken within a team or organisation, particularly in complex and dynamic working environments. Potential users may also resist or feel threatened by checklists because they are perceived to replace their expertise or decision making, or to oversimplify the complexity of work.

Checklist success factors
On the other hand, checklist success is associated with a number of important factors acting in combination; a checklist will have limited impact as a solo intervention. Firstly, it needs the commitment and support of healthcare leaders and local promotional champions. Success is also more likely where step-by-step instructions for simple or straightforward technical tasks are necessary, and where staff already know that variations in checking performance exist. Checklists also help where reliance on human memory is a known problem in a busy working environment.

Importantly, any checklist should be designed for flexibility to enable users to apply “common sense” judgements, otherwise it will be considered an irritation and remain unused. Checklists that are externally imposed and lack adaptability to suit local contexts can struggle to be fully accepted and implemented effectively. Ultimately, it should have a greater chance of adoption and sustainability where there is frontline consensus that checklist use is highly relevant, that it is feasible to use routinely and, most critically, is an improvement on how work is currently done.

Introducing MoRISS - A safety checklist for general practice?
So, given all that we know about checklists and their impacts in secondary care, why embark on such a development for the UK general practice setting?

Well, firstly there is a potential “checking problem” in general practice. This is not at all surprising given workload pressures, the complexity and uncertainty of care, and the volume of checks that are required to be carried out on a daily, weekly, monthly, quarterly, and annual basis to help the team run the practice safely and efficiently. General practice managers and nurses in particular will all attest to that!

Although the safety evidence base is limited in this area, most who work in general practice will be able to recall different incidents happening (and recurring) that involved a failure to check safety-critical issues or (probably more likely) necessary checking tasks not being performed on time. Examples would include:

  • patients with the same name being mixed up
  • emergency drugs being out of date when required
  • employing clinicians who are not currently registered to practise
  • test results not being communicated to patients
  • emergency equipment not working or not adequately calibrated
  • IT systems not being routinely backed up, and so on.

Secondly, although numerous safety checks are always carried out, they are often done in an ad hoc manner – that is, many general practices lack standardised, timely and consistent checking processes. Again, this is unsurprising. Most practices will have limited knowledge or experience of taking a ‘systems approach’ to identifying and routinely checking safety issues of importance, measuring performance and implementing any necessary improvements.

As a starting point to understanding this issue more clearly, NHS Education for Scotland worked closely with GP managers, nurses, and doctors to identify and prioritise a comprehensive range of safety hazards across the whole working environment. This in turn informed the participatory design of an integrated checklist – MoRISS (Monitoring Risk and Improving System Safety): learn.nes.nhs.scot/1032/patient-safety-zone/patient-safety-tools-and-techniques/moriss-checklist

Although we’ve labelled it a “checklist”, it would probably be more accurately described as a global checking system. The consensus is that it would need to be applied at least every four months. Pilot testing estimated that this would take around two hours to complete, which was deemed feasible and is arguably more manageable than some of the checking processes already in place.
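To make the idea of a “global checking system” concrete, the sketch below models a practice’s recurring safety checks as data, with each check carrying its own interval and due date. This is purely an illustrative assumption of how such a system might be represented digitally – the item names and intervals are hypothetical, not the actual MoRISS content.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class CheckItem:
    """One recurring safety check in a practice's checking schedule."""
    name: str
    interval_days: int   # how often the check should be repeated
    last_checked: date   # when it was last completed

    def due_by(self) -> date:
        return self.last_checked + timedelta(days=self.interval_days)

    def is_overdue(self, today: date) -> bool:
        return today > self.due_by()

# Hypothetical items, echoing the failure examples listed earlier.
schedule = [
    CheckItem("Emergency drugs in date", 30, date(2020, 1, 6)),
    CheckItem("Emergency equipment working and calibrated", 7, date(2020, 2, 3)),
    CheckItem("Clinical IT system backed up", 1, date(2020, 2, 9)),
    CheckItem("Clinicians' registration to practise verified", 365, date(2019, 5, 1)),
]

today = date(2020, 2, 10)
for item in schedule:
    status = "OVERDUE" if item.is_overdue(today) else "ok"
    print(f"{item.name}: due by {item.due_by()} [{status}]")
```

Even this toy version shows where the real difficulty lies: the data model is trivial, but deciding the intervals, who performs each check and what happens when an item is overdue is the socio-technical part.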

A way forward?
Those involved in the initial and most recent development and testing studies perceive it to be a very necessary intervention, as do many of the frontline practitioners and safety improvement decision makers who have since attended related workshops or conference presentations. The RCGP clearly recognises its potential value and has included it in its recently launched national patient safety toolkit.

To some extent the most straightforward part has been achieved. The real difficulty comes in developing the checklist further to make it easier to use and implement in busy practices – this is currently under discussion with colleagues in Healthcare Improvement Scotland. The idea of using a tablet or similar device to assist users is being given serious consideration, although introducing another technology potentially raises a set of other problems.

Importantly, how the checking process is designed, promoted, implemented, used, and supported as a patient safety intervention will largely determine its fate – we will need to understand if and how it works and why. If most GP teams believe it to be helpful and an improvement on how everyday work is currently carried out then there is hope for success. However, if most believe it to be a hindrance then…

Prof Paul Bowie is Programme Director (Safety and Improvement) with NHS Education for Scotland. Email: paul.bowie@nes.scot.nhs.uk Twitter: @pbnes

Suggested reading:
Catchpole K, Russ S. The problem with checklists. BMJ Qual Saf 2015;24:545–549. doi:10.1136/bmjqs-2015-004431

Bosk CL, Dixon-Woods M, Goeschel CA, et al. The art of medicine: reality check for checklists. Lancet 2009;374:444–445.

Bowie P, Ferguson J, Macleod M, et al. Participatory design of a preliminary safety checklist for general practice. Br J Gen Pract 2015;65(634).

Thinking differently about patient safety by Paul Bowie and Duncan McNab

In this latest blog, Paul Bowie and Duncan McNab challenge NHS leaders, care teams, educators and policymakers to think differently about patient safety.

Introduction

In patient safety science and practice perhaps the most obvious question to ask when people are unintentionally but avoidably harmed is: why did things go wrong?

There is, however, an alternative and equally intriguing question: why do things go right most of the time? We might further add: …and most especially in our highly complex healthcare systems?

Like most safety-critical industries worldwide, healthcare is being challenged to think differently about how we view the concept and practice of safety in the 21st Century [1]. Traditionally, the goals of patient safety are to learn from when things go wrong and also to create the conditions of care delivery that minimise the risks of patients being harmed as much as feasibly possible – a risk management principle [2] known as ALARP (As-Low-As-Reasonably-Practicable).

Most of our patient safety efforts, therefore, tend to focus on highlighting, reporting, quantifying and learning from ‘incidents’. When we seek, with hindsight, to learn as care teams or organisations, we frequently try to detect deviations from ‘ideal’ or ‘best’ practice and then design improvements to prevent or minimise the risk of future incidents – at a fundamental level this is a ‘find and fix’ mentality that aims to isolate specific ‘causal’ events (e.g. failure to communicate a test result to a patient) and rectify them so the identified incident trajectory should not re-occur.

The assumption is that unreliable technology and fallible clinicians, executives, managers and others should be treated as one and the same – as problematic system elements that either function as we intended (e.g. behave as expected or follow protocols rigidly) or do not function as we intended (e.g. ‘break down’, or ‘deviate’ from or ‘violate’ expected healthcare practice).

In these scenarios, ‘error’ is viewed as variability in human performance that we need to contain or eliminate [3]. Typically, we do this by developing or re-designing a protocol or procedure, by ‘firing off’ warnings and reminders, or by suggesting refresher or additional training for those involved.

The goal is to increase compliance with evidence-based guidance, organisational protocols or expected professional standards, which tends to over-focus on ‘improving’ our behaviours to minimise the number of unwanted outcomes – think about how we carry out and act on recommendations from quality improvement projects or learning from adverse event meetings, for example.

Ultimately we believe, simplistically, that if all system elements, including us, behave as expected then things will not go wrong.

In recent years this type of dominant thinking in patient safety (known as Safety-I) has come under critical challenge for arguably fostering false assumptions and providing insufficient explanations of why things go wrong in complex healthcare systems – and for being a significant reason why we have made so little progress in making care safer [4].

A ‘new movement’, known as Safety-II, has gradually emerged [5]. This perspective introduces the contrasting but compelling concepts of Safety-I and Safety-II as ways to explain why things go wrong sometimes in complex healthcare systems, but also go right in the great majority of cases.

Balancing Safety-I and Safety-II Thinking (Box 1)

In orthodox Safety-I thinking, safety is defined almost completely by the absence of something – the point where as few things as possible go wrong in everyday practice. To get to this reductionist state we examine why these ‘wrong things’ happen and attempt to repair them, often with limited success.

In contrast, Safety-II thinking aims to increase safety by maximising the number of events with a successful outcome. To achieve this means going beyond the study of adverse events to understand how things happen – good and not so good - under different conditions in everyday clinical work. We then get a more sophisticated understanding of the complexity of our work systems, which may better inform efforts to prospectively improve care quality and safety.

The Safety-II philosophy can be difficult to grasp for some with ingrained Safety-I beliefs. In essence it means accepting that the same behaviours and actions that lead to good care can also contribute to things going wrong, i.e. the same decisions that lead to care successes can also lead to care failures, even under similar conditions. So our everyday behaviours or actions that can sometimes lead to ‘error’ are actually variations of the same actions that more often than not produce (and may be required for) successful care outcomes. Traditionally we focus our learning only on ‘failures’ and often, more specifically, on ‘the failures of people’; however, it is only with hindsight that we can see that some of our decisions contributed to failure, while most led to success.

Box 1. Comparison of Safety-I and Safety-II Thinking [3, 8]

Definition of safety
  Safety-I: Absence of adverse outcomes; absence of unacceptable levels of risk.
  Safety-II: Things going right; presence of resilience abilities.

Safety management principle
  Safety-I: Reactive following incidents; risk-based; control of risk through barriers.
  Safety-II: Proactive; continuously anticipating changes; achieving success through trade-offs and adaptation.

Learning from experience
  Safety-I: Learning from incidents and adverse outcomes; focus on root causes and contributory factors.
  Safety-II: Learning from everyday clinical work; focus on understanding work-as-done and trade-offs.

Performance variability
  Safety-I: Potentially harmful; constrained through standardisation and procedures.
  Safety-II: Inevitable and useful; a source of both success and failure.

While things going wrong in healthcare are not uncommon (evidence suggests adverse events are reported in approximately one in ten hospital patients and two-to-three percent of primary care consultations [6-7]), successful clinical outcomes are obviously the norm in the vast majority of care provided. However, we cannot rely solely on analysis of the decisions and actions made in events with an unwanted outcome. Instead we should also focus our improvement efforts on learning about how and why success is usually achieved - this is at the core of Safety-II thinking.

Against this background, some key concepts related to Safety-II are briefly outlined, along with some practical pointers for care teams, leaders, educators and policymakers in thinking differently about patient safety:

Appreciate healthcare is a complex (sociotechnical) system

Healthcare performance is achieved through interactions (successful or otherwise) between the human, technical, social and organisational components of the system, and these interactions are rarely simple or linear. We therefore need to move away from assuming linear ‘cause and effect’ thinking (i.e. A + B led to C), because it is largely unsuited to adequately appreciating the complexity of patient care and to designing the necessary improvements [9]. This may be difficult for some, as in reality it may mean critically questioning, and even eschewing, related improvement concepts and methods, such as searching for ‘root causes’ when something goes wrong, using ‘process mapping’ in attempts to specify and understand highly complex care system interactions, and applying the ‘five whys’ technique.

Recognise that outcomes are ‘emergent’ in complex systems

In large, complex systems, important outcomes such as patient safety or workforce wellbeing emerge as a result of the interactions described above [9-11]. For example, patient safety is not an inherent feature of the system – we cannot state with certainty that a care system is safe at any one time (e.g. the warfarin monitoring system or the MRI working environment). We need to recognise that it is people who largely create safety, because of their dynamic ability to adapt and adjust their performance based on the system conditions faced at the time, underpinned by their skill, knowledge, experience and ingenuity, and supported by technology, procedures, colleagues and so on.

Rethink ‘Human Error’

Despite their widespread use, we should avoid employing unhelpful terms such as ‘human error’ and its synonyms (e.g. ‘medical error’ or ‘dental error’). The term is problematic because, amongst many other issues, it is fundamentally inaccurate, ill-defined, ambiguous, misleading and educationally backward, especially when it is viewed as a ‘cause’.

To continue to use these terms uncritically is arguably self-defeating and self-harming when it comes to learning, as it simply continues to foment the ‘blame and shame’ culture by focusing on the person rather than the wider system [10-11]. The concept of ‘medical error’ will be a future blog topic.

Reconcile work-as-done (WAD) and work-as-imagined (WAI)

WAD and WAI are important Safety-II concepts. WAD refers to how everyday work is actually done in reality, i.e. how clinicians and others adapt and adjust what they do to keep patients safe and ‘get the job done’.

WAI refers to the imagined assumptions of how work is done, or should be done, held by those – often detached from sharp-end reality – who design care processes or guidelines, manage organisations, formulate policies or regulate services [3-5].

As a simple example, think about any clinical protocol – is it used as it should be, and does it really reflect how the work is actually done? Can you work with colleagues to amend it to make it more informative, useful and usable by reconciling WAI and WAD?

Consider local rationality

When looking back with hindsight at the decisions of others at some point in time, seek to understand why those decisions made sense given the system situation and context the people involved faced (known as local rationality) [3-5]. People do not go to work to do a bad job; at the time, their decisions made sense to them, otherwise they would not have made them. So why was this, and how can we learn from it?

Efficiency-Thoroughness-Trade-Offs (ETTOs)

Again, when looking in hindsight at decisions and outcomes, consider and learn from the ETTOs that people made [3-5]. In highly complex systems, conditions are dynamic and people adjust what they do, which often involves making trade-offs between being efficient (e.g. signing a pile of prescriptions with a cursory check of each one) and being thorough (e.g. carefully checking and re-checking every single prescription that is signed).
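As a purely illustrative toy model, the trade-off in the prescription-signing scenario can be made concrete: being efficient saves time but catches fewer errors, while being thorough costs time but catches more. Every number below is invented for the sake of the example.

```python
import random

random.seed(42)

# Invented parameters for a hypothetical prescription-signing session.
N_SCRIPTS = 200          # prescriptions to sign
ERROR_RATE = 0.02        # fraction containing an error needing correction
CURSORY = {"seconds_each": 5,  "detection_prob": 0.4}   # efficient
CAREFUL = {"seconds_each": 30, "detection_prob": 0.95}  # thorough

def run(strategy):
    """Simulate one session; return (minutes spent, errors present, errors caught)."""
    errors = sum(1 for _ in range(N_SCRIPTS) if random.random() < ERROR_RATE)
    caught = sum(1 for _ in range(errors) if random.random() < strategy["detection_prob"])
    minutes = N_SCRIPTS * strategy["seconds_each"] / 60
    return minutes, errors, caught

for name, strategy in [("cursory", CURSORY), ("careful", CAREFUL)]:
    minutes, errors, caught = run(strategy)
    print(f"{name}: {minutes:.0f} minutes, caught {caught} of {errors} errors")
```

Neither strategy is ‘right’: which trade-off is appropriate depends on the system conditions at the time, which is exactly the point of the ETTO concept.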

Systems thinking in team-based learning from events (Box 2)

Before trying to understand and answer why something went wrong, ask: what does successful everyday ‘work-as-done’ normally look like in this situation? In this way, you can begin to reconcile both perspectives and build a more informed picture of the system of care you are trying to learn about and potentially change and improve [10-11].

Box 2. Some systems thinking pointers [10-11]

  • Start by understanding and describing current systems.
  • What does work-as-done look like?
  • How does everyday work usually lead to success?
  • Consider the whole system: are there key functions that need to be completed in a certain way? If so, this may be an area for checklists or specified criteria.
  • Are there areas where a variety of responses would be beneficial? If so, how can staff be helped to make the correct decision?
  • How can variability be managed? Consider the interactions between staff and with technology – can this be simplified or strengthened to improve co-ordinated working?

For further resources please access:

Safety-I and Safety-II: www.england.nhs.uk/signuptosafety/wp-content/uploads/sites/16/2015/10/safety-1-safety-2-whte-papr.pdf

Systems Thinking for Everyday Work: learn.nes.nhs.scot/6027/patient-safety-zone/safety-skills-and-improvement-research-collaborative-skirc/stew-model/systems-thinking-for-everyday-work-stew

REFERENCES

[1] Mannion R, Braithwaite J. False dawns and new horizons in patient safety research and practice. Int J Health Policy Manag 2017;6:685–689.

[2] Health and Safety Executive. www.hse.gov.uk/risk/expert.htm [accessed 31st July 2019]

[3] Hollnagel E. Resilience engineering: a new understanding of safety. Journal of the Ergonomics Society of Korea 2016;35(3):185–191.

[4] Braithwaite J, Wears R, Hollnagel E. Resilient health care: turning patient safety on its head. Int J Qual Health Care 2015;27:418–420. doi:10.1093/intqhc/mzv063

[5] Hollnagel E. Safety-I and Safety-II: the past and future of safety management. Surrey: Ashgate; 2014.

[6] Kohn LT, Corrigan JM, Donaldson MS, editors. To Err is Human: Building a Safer Health System. Institute of Medicine (US) Committee on Quality of Health Care in America. Washington (DC): National Academies Press (US); 2000.

[7] Panesar SS, deSilva D, Carson-Stevens A, et al. How safe is primary care? A systematic review. BMJ Qual Saf 2016;25(7):544–553.

[8] Sujan M. A Safety-II perspective on organisational learning in healthcare organisations. Int J Health Policy Manag 2018. doi:10.15171/ijhpm.2018.16

[9] Plsek P, Greenhalgh T. The challenge of complexity in healthcare. Br Med J 2001;323:625–628.

[10] McNab D, Bowie P, Ross A, Morrison J. Understanding and responding when things go wrong: key principles for primary care educators. Educ Prim Care 2016;27:258–266.

[11] McNab D, Bowie P, Morrison J, Ross A. Understanding patient safety performance and educational needs using the 'Safety-II' approach for complex systems. Educ Prim Care 2016;27(6):443–450.

New Guidance on Learning from Adverse Events

The recent publication of a cross-industry White Paper on Learning from Adverse Events by the Chartered Institute of Ergonomics and Human Factors (CIEHF) presents a significant opportunity for health and social care educators, teams and organisations to think differently about our patient safety learning practices.

The purpose of the new guidance document is twofold:

1. To help organisations understand a Human Factors perspective when investigating and learning from when things go wrong; and
2. To provide key principles that can be applied to help capture the human contribution to adverse events.

While the target audience is safety-critical industries in all sectors, it is specifically aimed at those that do not employ professionally trained Human Factors specialists. In the NHS there are estimated to be fewer than 10 such specialists working in embedded frontline practice – in arguably the most complex safety-critical industry that has ever existed, with a workforce upwards of 1.2 million. By comparison, a pan-European air traffic control body with around 7,500 staff employs around 40 Human Factors specialists.

The guidance is timely, therefore, and should be of strong interest to many in health and social care, particularly those leading on patient safety education and training, organisational and team-based learning from events, and those advising on the design and implementation of safety improvement activity and policy at all levels.

Based firmly on fundamental Human Factors principles, the White Paper offers guiding statements that are reflective of good practice in organisational and team-based learning from events. A few examples are outlined:

Seek opportunities for learning beyond the actual event

Near misses, close calls, anonymised reporting systems and sensitivity to weak signals from everyday work all provide opportunity for learning and continuous improvement.

Avoid searching for blame

Focusing on individual failure and blame creates a culture of concealment and reduces the likelihood that the underlying contributory factors related to events will be identified.

Recognise that adverse events in complex systems are nearly always systemic

Serious adverse events can only be understood in terms of the overall socio-technical system in which the event occurred. That means understanding, and being open to, the possibility of a need for change in any of the components of the system. Investigating why the barriers the organisation thought it had in place were not effective in preventing the event can bring a lot of insight and learning about systemic issues.

Recognise the difference between ‘work as imagined’ and ‘work as actually done’

Investigators and learners must be sensitive to the fact that ‘work-as-done’ often diverges significantly from how work is documented, disclosed or prescribed in formal job procedures. The goal of learning is to improve ‘work-as-done’ and then to seek to describe and represent it more accurately in formal work procedures.

Understand the situation and the context in which performance occurs

Generally speaking, situational factors are largely ‘factual’ and are connected to the circumstances of the incident (before, during and after) in terms of location, space and time. Contextual factors, by contrast, are mainly focused on the perceptions, beliefs, intentions and values of those involved and the meaning that they formulate and assign to the specific incident being analysed. These behavioural or motivational factors are not always observable or obvious.

Do not confuse recommendations with solutions

Recommendations should set out what improvement is needed, without defining how that improvement is to be achieved. Solutions are concerned with satisfying recommendations in a way that is practical, effective and sustainable. Good recommendations allow opportunity for a range of solutions. Recommendations should be linked to system performance such that the reason for the change remains understood as the solution is developed and implemented.

Accept that learning means changing

Lessons identified are not the same as lessons learned. If nothing changes in the way the people in the organisation think, behave or react to future events and situations, nothing has been learned. Change in itself, though, does not mean effective learning – change must be effective in implementing the intent of recommendations, must be understood and accepted by those affected by it, and must be embedded so that it is sustained.

For health and social care educators, organisations and policy leaders the challenge will be to determine to what extent our current practices are informed by these statements of good practice and what needs to change.

  • Where are the gaps in what we do?
  • How and where can we implement these learning principles across health and social care?
  • Who needs to be upskilled, and to what extent?

The Scottish Government’s Openness and Learning agenda offers a good place to start in exploring the relevance and transferability of these learning principles at all levels of health and social care practice and education. For example, a key element of this agenda is to improve the processes and learning related to Team Based Quality Reviews (TBQRs; previously termed hospital M&Ms or primary care significant event analysis meetings). These reviews serve as the core mechanism for facilitating the reporting of adverse events and near misses, and learning from everyday practice, within the framework of organisational governance and learning. They provide a forum for seeking multiple perspectives (at all levels, from frontline clinical teams to board members) and for reviewing cases using a systems approach to identify existing weaknesses for the purpose of collective learning and improvement. TBQRs also serve as platforms where existing outputs from relevant workstreams or national registries can be shared with the teams that will be able to action them within their context.

To access a copy of the CIEHF White Paper please visit: events.ergonomics.org.uk/event/learning-from-adverse-events

If you’re interested in helping us to translate these principles to health and social care education and practice, please get in touch:

Prof Paul Bowie is NES Programme Director (Safety & Improvement) and the CIEHF Healthcare Special Interest Group Lead (Patient Safety). He contributed to the content of the CIEHF White Paper. Email: paul.bowie@nes.scot.nhs.uk Twitter: @pbnes

Mr Manoj Kumar is Consultant Surgeon, NHS Grampian and Clinical Lead of the Scottish Mortality and Morbidity Programme. Email: m.kumar3@nhs.net Twitter: @Manoj_K_Kumar

Is the "never event" concept a useful safety management strategy in complex care systems?

Why is the area important?

A sub-group of rare but serious patient safety incidents, known as ‘never events’, is judged to be ‘avoidable’. There is growing interest in this concept in international care settings, including UK primary care. However, issues have been raised regarding the well-intentioned coupling of ‘preventable harm’ with zero tolerance ‘never events’, especially around the lack of evidence for such harm ever being totally preventable.

What is already known and gaps in knowledge?

We consider whether the ideal of reducing preventable harm to ‘never’ is better for patient safety than, for example, the goal of managing the risk of harm to ‘as low as reasonably practicable’, which is well established in other complex socio-technical systems and is demonstrably achievable. We reflect on the ‘never event’ concept in the primary care context specifically, although the issues and the polarised opinion highlighted are widely applicable. Recent developments to validate primary care ‘never event’ lists are summarised and alternative safety management strategies are considered, e.g. Safety-I and Safety-II.

Future areas for advancing research and practice

Despite their rarity, if there is to be a policy focus on ‘never events’, then specialist training for key workforce members is necessary to enable examination of the complex system interactions and design issues that contribute to such events. The ‘never event’ term is well intentioned but largely aspirational; nonetheless, it is important to question prevailing assumptions about how patient safety can be understood and improved by offering alternative ways of thinking about the related complexities.

Read the full article, coming soon in the International Journal of Healthcare Quality.

Development and application of ‘systems thinking’ principles for quality improvement

‘Systems thinking’ is often recommended in healthcare to support quality and safety activities, but a shared understanding of this concept and purposeful guidance on its application are limited. Healthcare systems have been described as complex, where human adaptation to localised circumstances is often necessary to achieve success. Principles for managing and improving system safety developed by the European Organisation for the Safety of Air Navigation (EUROCONTROL; a European intergovernmental air navigation organisation) incorporate a ‘Safety-II systems approach’ to promote understanding of how safety may be achieved in complex work systems. We aimed to adapt and contextualise the core principles of this systems approach and to demonstrate their application in a healthcare setting. The original EUROCONTROL principles were adapted using consensus-building methods with frontline staff and national safety leaders.

Six interrelated principles for healthcare were agreed. The foundation concept acknowledges that ‘most healthcare problems and solutions belong to the system’. Principle 1 outlines the need to seek multiple perspectives to understand system safety. Principle 2 prompts us to consider the influence of prevailing work conditions – demand, capacity, resources and constraints. Principle 3 stresses the importance of analysing interactions and work flow within the system. Principle 4 encourages us to attempt to understand why professional decisions made sense at the time, and Principle 5 prompts us to explore everyday work, including the adjustments made to achieve success in changing system conditions. A case study is used to demonstrate their application in the analysis of a system and in the subsequent design of an improvement intervention.

Application of the adapted principles underpins, and is characteristic of, a holistic systems approach, and may aid care team and organisational system understanding and improvement.

The full article can be accessed here: bmjopenquality.bmj.com/content/9/1/e000714.full