This post discusses the current regulation of AI/ML-enabled medical devices in the UK and compares potential policy changes with the approach taken in the US.
The emergence of Artificial Intelligence (AI) and Machine Learning (ML) systems in medical devices has been linked to improved healthcare – for example, by providing earlier and more accurate diagnoses to patients, automating and simplifying hospital work, and drawing insights from the large amount of data generated by the delivery of healthcare. However, the nature of AI/ML-enabled medical devices, or Artificial Intelligence as a Medical Device (AIaMD), also creates novel challenges. Generally, AIaMDs differ from other Software as a Medical Device (SaMD) and medical technologies in a number of ways. First, they are capable of learning from feedback and improving their performance. Second, they have the potential to become ubiquitous in medical interactions, for example by aiding diagnosis and treatment recommendations. And third, it can be difficult (or impossible) to reverse-engineer the way they reach their recommendations.
Following the Medicines and Medical Devices Act 2021, the Medicines and Healthcare products Regulatory Agency (MHRA) launched a consultation on the future of medical device regulations with a specific chapter on SaMD and AIaMD. In a previous post, we discussed SaMDs and the concomitant regulatory uncertainties in relation to UK and EU law. In this post, with the Government response to the consultation now published and new regulations expected shortly, we review some of the challenges presented by AI/ML. In particular, we look at the current approach in the UK, contrast this with how the United States (US) has approached AIaMDs, and consider what future UK regulatory approaches might look like.
Policy Discussion in the UK and EU
AIaMDs continue to be regulated in the UK by the same review processes as general SaMDs under the Medical Devices Regulations 2002. Whilst the new EU Regulations on Medical Devices and In Vitro Diagnostic Medical Devices (the EU MDR and IVDR) update some requirements for SaMD as far as Northern Ireland is concerned (as set out in earlier blog posts on SaMD and the Changing Face of Medical Devices Regulations, NI still falls within the ambit of the two EU Regulations under the Northern Ireland Protocol), they do not deal specifically with AI/ML.
Whilst there is no general UK legislation vis-à-vis AI, the EU has released a draft AI Regulation which will, once adopted, apply to AI/ML systems generally. The draft Regulation, in addition to prohibiting specific applications of AI/ML, utilises a risk classification system to determine the requirements for safety and evaluation, and the obligations of providers. This is similar to other product legislation, including the Medical Devices Regulations. AI that is a component of, or is itself, a product covered by the Medical Devices Regulations is, according to current Article 6 and Annex II of the draft Regulation, classified 'high risk' and subject to stricter requirements and obligations. It also appears, through a confusingly worded Article 24, that AI that is a component of, or is itself, a medical device within the meaning of the Medical Devices Regulations will be subject to both the draft AI Regulation and the EU MDR and IVDR.
Given the UK's post-Brexit status, the draft EU AI Regulation will not be applicable generally in Great Britain, whilst its potential application to Northern Ireland is unclear. Article 13.4 of the Northern Ireland Protocol provides for situations where the EU introduces a new law that falls within the scope of the Protocol but does not replace or amend an existing law listed in it. In such circumstances, the Joint Committee will take a decision on whether to add the new law to the appropriate annex so as to include it as retained law in Northern Ireland. It is possible that the AI Regulation may be seen as falling within this scope, particularly since, as noted above, AI that is part of or is itself a medical device is to be subject to both the AI Regulation and the relevant EU Medical Device Regulations, the latter of which are covered by the Protocol. So, there is continuing uncertainty here as to the future regulation of such devices in Northern Ireland and its relationship with the rest of the UK, exacerbated by the Government's latest promises to renegotiate the Protocol itself.
Meanwhile, and prior to the announced consultation on the future of medical devices, the MHRA launched a 'Software and AI as a Medical Device Change Programme' dedicated to providing a regulatory framework and guidance to ensure sufficient protection of patients and the public. The programme consists, inter alia, of three work packages concerning AIaMD. Project AI Rigour is concerned with utilising existing frameworks, and developing supplementary ones, to ensure that AIaMDs are safe and effective. Project Glass Box (AI Interpretability) is intended as an antidote to the opacity of AI; in other words, it is dedicated to developing frameworks ensuring that AI models are 'sufficiently transparent', safe, and effective. Finally, Project Ship of Theseus is committed to assessing how the adaptivity of AIaMDs fits within current UK law. It remains to be seen what the outcomes of these work packages will be, but the commencement of the consultation on the future of medical devices provided a limited insight into what we might expect in terms of regulation in the future.
The Government's response to the MHRA consultation states there is no intention to adopt any specific legislation for AIaMDs beyond the rules applicable to SaMDs more generally. However, it does intend to align its classification rules for SaMD with those of the International Medical Device Regulators Forum for general medical devices (excluding IVDs). Further, in the case of SaMDs where the risk profile is unclear, it intends to adopt an 'airlock classification' that would provide for interim access to the UK market but would involve 'monitoring and restricting the SaMD as if it were a high-risk product'. The Government's response also indicated that SaMD products are likely to be subject to additional requirements in terms of safety and performance, akin to the General Safety and Performance Requirements ('GSPR') in the EU Medical Device Regulation, Annex I, para 17.
The MHRA may also introduce a 'Predetermined Change Control Plan' (PCCP), similar to that introduced by the US Food and Drug Administration (FDA) and outlined further below. This would operate on a voluntary basis at first (with a potential mandate in the future) to develop a robust post-market surveillance system for SaMDs. Arguably in line with its expressed commitment to incorporating international best practice, what is proposed includes elements found in EU law, international standards and, pertinently for this post, approaches in the US. As such, it is to these approaches that we now turn.
The US Approach
The FDA has granted marketing authorisations for various AI-based medical devices, including an AI diagnostic system for the eye disease diabetic retinopathy and AI-based clinical decision support software for alerting providers to a potential stroke. As there is no separate review process for AIaMDs, devices meeting the definition of 'medical device' under s201(h) of the Federal Food, Drug, and Cosmetic Act are evaluated and classified based on the risks posed by their intended clinical use. The AIaMDs that have received approval from the FDA have typically included only 'locked' algorithms – algorithms that do not evolve over time and do not use new data to alter their performance, but instead may be modified by the manufacturer at intervals. Such algorithms do not present the same difficulty in auditing and evaluating safety as 'adaptive' or 'unlocked' algorithms, which learn from new user data through use and feedback. Since adaptive algorithms are continuously learning, they may provide outputs that differ from those initially cleared by the review process. Given the unique challenges presented by adaptive algorithms, the existing review processes are not adequate for them.
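For readers less familiar with the terminology, the distinction can be made concrete with a minimal, purely illustrative sketch of our own (the class and method names below are hypothetical, not drawn from any FDA or MHRA document): a locked model's parameters are frozen once cleared, whereas an adaptive model keeps updating them from new data, so its outputs can drift from those evaluated at review.

```python
# Purely illustrative sketch of the 'locked' vs 'adaptive' distinction.
# Names are hypothetical; this is not any regulator's or vendor's code.

class LockedModel:
    """Behaviour is fixed at clearance; it changes only if the
    manufacturer ships a new version at intervals, for re-review."""

    def __init__(self, weights):
        self.weights = list(weights)  # frozen at market authorisation

    def predict(self, features):
        # A simple linear score standing in for a diagnostic output.
        return sum(w * x for w, x in zip(self.weights, features))


class AdaptiveModel(LockedModel):
    """Keeps learning from new user data after deployment, so its
    outputs can drift from those assessed at premarket review."""

    def update(self, features, target, lr=0.01):
        # One gradient step on squared error from a new real-world case.
        error = self.predict(features) - target
        self.weights = [w - lr * error * x
                        for w, x in zip(self.weights, features)]
```

On this picture, the regulatory question discussed below is when accumulated post-deployment updates amount to a change significant enough to require fresh review.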
In 2019, the FDA published a Discussion Paper on AI/ML-based SaMDs, describing a potential regulatory approach to premarket review for AIaMDs. In particular, the paper proposes a total product lifecycle (TPLC) approach, intended to allow the FDA's regulatory oversight to embrace the iterative nature and adaptability of AIaMDs while providing for patient safety. This approach is one developed and implemented on a voluntary basis in the FDA's Digital Health Software Pre-Certification (Pre-Cert) Program, launched as a pilot in 2017 and highlighted in the Discussion Paper. In this regard, the FDA proposes to focus on an organisation's good ML practices and on premarket review for those devices that need such review to establish reasonable safety. Furthermore, the TPLC approach includes the PCCP, a version of which is currently being proposed by the MHRA as highlighted above, as part of premarket submissions. The FDA's approach would cover: (1) the anticipated modifications, or SaMD Pre-Specifications (aspects the manufacturer intends to change through learning); and (2) the associated Algorithm Change Protocol (the methodology used to implement those changes while managing risks to patients). However, some have critiqued the FDA's regulatory approach.
First, these critics contend that the FDA needs to widen its scope from evaluating AIaMDs as 'products' to assessing them as part of systems, ultimately maximising their safety and efficacy in healthcare. This is because most AIaMD products operate as part of a larger system involving various kinds of human involvement – from healthcare teams inputting the data to physicians reacting to the AI's recommendations. AIaMDs may therefore perform differently in testing environments and in actual practice settings. However, this would mean expanding the mandate of agencies like the FDA, which are designed to regulate products rather than systems, and may not be desirable. Second, with respect to a continuously 'learning' product, such as an adaptive AIaMD, it can be difficult to indicate clearly when newly 'learned' functions reach sufficient clinical significance to require a new 510(k) notification (a premarket submission made to the FDA to demonstrate that a device is as safe and effective as a legally marketed device).
Thus, whilst the US has been quicker off the mark than either the UK or the EU in addressing the challenges of AIaMD, the extent to which its approach should (or indeed can) be emulated in the UK remains to be seen. This sentiment was also reflected in responses to the MHRA consultation on the issue of introducing the PCCP, with some respondents highlighting that the scheme had only recently been introduced in the US.
A Continuing Uncertain Future?
The future of regulating AI/ML-based SaMDs remains undetermined. Much of the MHRA consultation and the Government response signals some welcome updates to the regulation of SaMDs in general. However, like the development of AI/ML itself, much remains unknown about the future UK approach. This is especially the case given that specific requirements for AIaMDs (distinct from those for SaMD generally) look unlikely to be placed in legislation. The consultation itself, as well as the Government response, indicates that other measures, such as encouraging clinical performance evaluation methods and clarifying the definition of AIaMDs, are likely to be placed in guidance rather than in legislation.
While the UK is considering a PCCP approach like the FDA's, which includes premarket submissions specifying anticipated modifications of the AIaMD and an associated algorithm change protocol, it is premature to assess its viability without further policy details. As we noted above, some critics have pointed out the limitations of regulating AIaMDs in the same way we regulate other products. They would prefer intervention at the systems level, which could take account of cooperation and integration, including human collaboration and potential error. Nevertheless, this arguably more responsive approach does not look set to be considered in formal legislation any time soon.
Written by: Ashita Jain, Rachael Dickson, and Laura Downey
Funding: Work on this was generously supported by a Wellcome Trust Investigator Award in Humanities and Social Sciences 2019-2024 (Grant No: 212507/Z/18/Z) and a Research England QR Enhancing Research and Knowledge Exchange Funding Programme award.