Overcoming clinical safety isolation in the AI age
With growing belief that AI will transform healthcare services, our Associate and Clinical Safety Officer, Zoe Leadbetter, identifies five key challenges in digital clinical safety and the strategies you can implement to overcome them.
Health technology is developing at pace, and there’s growing belief that the integration of artificial intelligence (AI) holds transformative potential for healthcare services. The upcoming 10-year plan to reform the NHS will inevitably emphasise the need for widespread technological adoption, integration and innovation. But before we hang our hopes on technology ‘saving’ the NHS, we should pay attention to the lessons of the past, particularly in the context of clinical safety. Drawing on Faculty’s extensive experience, this blog identifies key challenges in digital clinical safety and proposes strategies to overcome them.
Context
There is no end of cases illustrating the risks posed by health technology products. This year the BBC reported that IT failures have been linked to the deaths of three patients and more than 100 instances of serious harm at NHS hospital trusts in England. While these incidents are thankfully rare, their occurrence emphasises the urgent need for comprehensive digital clinical safety assessments that keep pace with innovation, especially in health IT systems utilising AI.
Clinical safety standards in England are underpinned by the Health and Social Care Act 2012, under which two key standards apply: DCB0129 for manufacturers and DCB0160 for deploying healthcare organisations. These standards set out a framework of activities to proactively identify, evaluate and mitigate risks of patient harm before a technology goes live, and to monitor risk after deployment. They sorely need updating for the times, and the minister for patient safety, Baroness Gillian Merron, has announced that DHSC will review them in 2024/25.
Unlike many other health IT manufacturers, Faculty embeds clinical safety activities within its product lifecycle, led by our in-house Clinical Safety Officers (CSOs) and clinical safety workstream leads. In our experience this approach goes beyond a tick-box exercise; it enables rich clinical safety assessments that prioritise patient safety, while also reducing the overall cost of delivering a solution. The cost falls because clinical safety is present throughout design and development, reducing the need for remedial work to make a product clinically safe after it has been built.
Over the past few years Faculty’s in-house CSOs have built up a wealth of experience in clinical safety. We’ve identified five key challenges, alongside our strategies to tackle them and deliver successful digital clinical safety:
Challenge #1: The ‘zero risk’ fallacy
It's all too easy to set unrealistic safety expectations.
It’s important to appreciate that while we can implement controls to reduce risk, achieving ‘zero risk’ is often unrealistic. Taken to the extreme, this mindset would mean never introducing any new IT system, on the grounds of avoiding even minor potential clinical safety risks.
We take a pragmatic approach to clinical safety assessments.
We are transparent with our assessments and recognise there will almost always be some level of residual risk. The key decision for the CSO at the deploying healthcare organisation is whether the benefits of the new solution outweigh its residual risk when compared with the level of risk already carried by the existing system.
Challenge #2: Lack of a UK standard for clinical safety assessments in AI products
The absence of a standardised framework for clinical safety assessments of AI products in the UK creates considerable uncertainty for manufacturers, healthcare organisations and the public.
For instance, there are no defined accuracy thresholds required of healthcare AI products. This leads to varying solution accuracy, making clinical safety assessments complex, heavily subjective and less transparent to the public.
As leading experts in applied AI, we know it’s unrealistic to expect generalised accuracy thresholds. However, a framework for AI evaluations, co-produced by technology suppliers, independent evaluators and adopting sites, would help – an approach that aligns with NHS England’s recent recommendations in its lessons from the AI in Health and Care Award.
Without clear guidelines, ensuring that AI solutions meet safety standards becomes challenging, and the difficulty of producing consistent clinical safety assessments risks undermining trust in AI solutions.
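To make the idea of a shared evaluation framework more concrete, the sketch below shows the kind of acceptance-criteria check that suppliers, independent evaluators and adopting sites might agree and then apply consistently. The metrics and thresholds used here (sensitivity and specificity cut-offs) are hypothetical illustrations, not values defined by any current UK standard.

```python
# Illustrative sketch only: the thresholds below are hypothetical examples,
# not values mandated by DCB0129/DCB0160 or any UK AI evaluation framework.
from dataclasses import dataclass


@dataclass
class EvaluationResult:
    sensitivity: float  # true positive rate on an independent clinical test set
    specificity: float  # true negative rate on an independent clinical test set


# Hypothetical acceptance criteria agreed up front for a specific clinical use case.
ACCEPTANCE_CRITERIA = {
    "sensitivity": 0.95,
    "specificity": 0.85,
}


def meets_acceptance_criteria(result: EvaluationResult) -> bool:
    """Return True only if the evaluated model clears every agreed threshold."""
    return (
        result.sensitivity >= ACCEPTANCE_CRITERIA["sensitivity"]
        and result.specificity >= ACCEPTANCE_CRITERIA["specificity"]
    )


if __name__ == "__main__":
    candidate = EvaluationResult(sensitivity=0.97, specificity=0.88)
    print("Meets agreed criteria:", meets_acceptance_criteria(candidate))
```

The value is not in these particular numbers, but in having criteria that are agreed before deployment and checked in the same way at every site – exactly what a co-produced framework would make possible.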
Challenge #3: Agile innovation vs waterfall regulation
To foster innovation in health IT systems, the architecture of clinical safety regulation must keep pace and evolve.
Clinical safety assessments follow a waterfall approach and assume a well-defined, stable scope. Healthcare IT products, on the other hand, are typically developed using an agile approach, characterised by iterative development and a continuously evolving scope. This makes embedding clinical safety into development teams’ thought processes all the more important.
We welcome the upcoming DHSC review of clinical safety standards and are looking forward to contributing to the consultation, especially on improving how AI and iterative development can be addressed.
Challenge #4: Conflicting motivations can lead to organisational friction
The driving motivation for CSOs is to promote patient safety through clinical safety compliance. Development teams want to build a solution customers love, within the shortest time frame. It’s not uncommon for these two motivations to pull against one another.
We find the key to resolution is a combination of pragmatism and collaboration, built on top of a relentless safety-first approach.
Challenge #5: Digital clinical safety is a rapidly evolving specialism, marked by wide variation in knowledge, experience and application practices
Digital clinical safety is still a relatively young discipline with a small, yet growing, cohort of champions. Despite the standards being legally mandated in England, we’ve experienced huge variation in how NHS trusts apply them.
NHS organisations range from lacking a CSO and clinical safety awareness altogether, to having proactive CSO teams integrated within digital transformation functions. It’s not uncommon for us to upskill NHS organisations on why they need clinical safety and what their responsibilities are under the law.
We know that NHS organisations all want to comply with the clinical safety standards, but the main issue is the under-resourcing of CSO time.
This needs to be combated through wider awareness of the risks of new IT products, the requirements under the law and the resources available to trusts.
The NHSE implementation guidance is a great source of information we often refer back to when we encounter novel challenges. For example, recently we used this guidance to steer our approach to distinct types of third parties and how to include them within our clinical safety assessments.
Key takeaway
You can overcome CSO isolation and organisational misalignments through collaboration.
Historically, digital clinical safety teams have sat apart from patient safety teams. A CSO requires an unusual combination of skills and experience across clinical and technical areas, so the role can be isolating, limiting the potential for collaboration and shared problem solving that would benefit patient safety.
At Faculty we recognise that patient safety is a team sport. We address CSO isolation and the other challenges listed above through intentional collaboration.
We have a group of qualified CSOs, so no risk assessment is ever conducted by a single CSO. This allows us to discuss risks, scores and controls as a group, where constructive challenge is the norm and actively encouraged.
This makes our hazard scoring more robust and provides the confidence we need to push back against internal development timelines or ideate pragmatic solutions.
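For readers less familiar with hazard scoring, the sketch below shows the general shape of the likelihood × severity risk matrix used in DCB0129-style assessments. The matrix values here are illustrative placeholders only; a real assessment uses the matrix published with the standards and the judgement of qualified CSOs.

```python
# Illustrative sketch of likelihood x severity hazard scoring. The matrix values
# below are placeholders, not the official DCB0129/DCB0160 matrix; real scoring
# relies on the published standards and the judgement of qualified CSOs.

# RISK_MATRIX[likelihood][severity] -> risk rating (1 = lowest, 5 = highest)
RISK_MATRIX = {
    "very_low":  {"minor": 1, "significant": 1, "considerable": 2, "major": 2, "catastrophic": 3},
    "low":       {"minor": 1, "significant": 2, "considerable": 2, "major": 3, "catastrophic": 4},
    "medium":    {"minor": 2, "significant": 2, "considerable": 3, "major": 3, "catastrophic": 4},
    "high":      {"minor": 2, "significant": 3, "considerable": 3, "major": 4, "catastrophic": 5},
    "very_high": {"minor": 3, "significant": 4, "considerable": 4, "major": 5, "catastrophic": 5},
}


def risk_rating(likelihood: str, severity: str) -> int:
    """Look up the risk rating for a hazard at a given likelihood and severity."""
    return RISK_MATRIX[likelihood][severity]


# Example: a hazard scored before and after a control reduces its likelihood.
initial = risk_rating("high", "major")   # pre-mitigation score
residual = risk_rating("low", "major")   # residual score once controls are applied
print(f"Initial risk: {initial}, residual risk: {residual}")
```

Scoring as a group is essentially a structured debate about which cell of this matrix a hazard sits in, and whether a proposed control genuinely moves it.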
We work closely with our development teams so that clinical safety is integrated into project plans from the beginning. On larger projects we nominate clinical safety workstream leads who are not CSOs, but champion clinical safety from inside the project team.
We also collaborate beyond our organisational boundaries: working openly with trust CSOs, attending NHSE CPD days and joining the informal monthly CSO coffee and chat group. This group, established and led by Kaye Reynolds, Lead Digital Health CSO based at The Queen Elizabeth Hospital King's Lynn NHS, is a fantastic community of CSOs from the public and private sectors.
Everyone shares the common goal of achieving a safe solution. Open and honest dialogue between CSOs in manufacturing and deployment organisations fosters collaboration, accelerates win-win outcomes, and ensures healthcare organisations gain maximum value from the solution.
If you want to talk about any of the above, get in touch at clinicalsafety@faculty.ai to reach a member of our Clinical Safety Team – Matt, Zoe, Mecaela, Reneé and Katie.