Regulatory implications of AI in medical practice


It’s hard to escape all the speculation on how AI will fundamentally change the world as we know it.

As the Professional Standards Authority's Safer Care for All report notes, innovative Artificial Intelligence (AI) systems within healthcare may significantly transform the role and responsibilities of healthcare professionals. The report also notes that the pace and breadth of this transformation requires vigilance and monitoring from regulators, both to safeguard patient safety and to help clinicians understand the boundaries of liability and accountability.

With these significant new risks and challenges in mind, the GMC has conducted a programme of research to better understand how AI is being used on the ground, so that it, and the wider healthcare system, can support doctors working with AI in the future.

In 2023–24, the GMC supported The Alan Turing Institute to survey doctors on their perceptions of AI. This research showed that over a quarter of respondents had used some form of AI in their practice in the last 12 months. Most respondents had positive perceptions of AI, and those who used AI were more likely to view it favourably. The research also showed doctors were uncertain about the risks and their professional responsibilities when using AI and felt they hadn’t received enough training in this area.

To explore the themes from the survey further, the GMC commissioned Community Research to conduct follow-up interviews with some of the survey respondents who used AI in their practice. This research explored doctors' individual experiences of generative AI and of diagnostic and decision support systems in more depth than was possible in the survey. In total, 17 interviews were conducted with doctors from a range of specialities. All bar one of the doctors interviewed were still using AI in their practice.

These doctors explained that they were largely confident they could manage and mitigate the risks of using AI in their practice and that they understood the professional responsibilities associated with using it. However, they also highlighted the need for further education and training on how to use AI, and on the ethical and security considerations involved in developing AI systems using patient data sets. Furthermore, they believed that regulators have an important role to play, including clarifying the responsibilities and risks associated with AI systems.

The full report is available here, together with an interesting blog by Dr Alan Abraham, one of the GMC's clinical fellows, in which he discusses his experiences of how AI technologies might shape the future of patient care.

Lucy Lea