AI in healthcare: what are the risks for the NHS?

At a time when there are more than seven million patients on the NHS waiting list in England and around 100,000 staff vacancies, artificial intelligence could revolutionise the health service by improving patient care and freeing up staff time.
Its uses are varied, from spotting risk factors in a bid to help prevent chronic conditions such as heart attacks, strokes and diabetes, to assisting clinicians by analysing scans and x-rays to speed up diagnosis.
The technology is also maximising productivity by carrying out routine administrative tasks, from automated voice assistants to scheduling appointments and capturing doctors' consultation notes.
'Transformative'
Generative AI, a type of artificial intelligence that can produce various types of content, including text and images, will be transformative for patient outcomes, according to Sir John Bell, a senior government advisor on life sciences.
Sir John is president of the Ellison Institute of Technology in Oxford, a major new research and development facility investigating global issues, including the use of AI in healthcare.
He says generative AI will improve the accuracy of diagnostic scans and generate forecasts of patient outcomes under different medical interventions, leading to more informed, personalised treatment decisions.
But he warns researchers should not work in isolation; instead, innovation should be shared fairly around the country to avoid some communities missing out.
"To achieve these benefits the NHS must unlock the enormous value currently trapped within data silos, to do good while safeguarding against harm," Sir John says.
"Allowing AI access to all the data, within safe and secure research environments, will improve the representativeness, accuracy and equality of AI tools to benefit all walks of society, reducing the financial and economic burden of running a world-leading National Health Service and leading to a healthier nation."

'Mitigate risks'
AI opens up a world of possibilities, but it brings risks and challenges too, like maintaining accuracy. Results still need to be verified by trained staff.
The government is currently evaluating generative AI for use in the NHS; one issue is that it can sometimes 'hallucinate' and generate content that is not substantiated.
Dr Caroline Green, from the Institute for Ethics in AI at the University of Oxford, is aware of some health and care staff using models like ChatGPT to search for advice.
"It is important that people using these tools are properly trained in doing so, meaning they understand and know how to mitigate risks from technological limitations… such as the possibility for wrong information being given," she says.
She feels it is important to engage people working in health and social care, patients and other organisations early in the development of generative AI, and to keep assessing its impact with them to build trust.
Dr Green says some patients have decided to deregister from their GPs over the fear of how AI may be used in their healthcare and how their private information may be shared.
"This of course means that these individuals may not receive the healthcare they may need in the future and fall through the cracks," she says.

Then there is the risk of bias. AI models may be trained on datasets that do not reflect the populations they are applied to, exacerbating health inequalities based on factors such as gender or ethnicity.
Therefore, regulation is key. It needs to keep patients safe and protect their personal data, whilst at the same time increasing capacity to keep up with developments and allow AI to evolve and learn on the job.
AI-powered medical devices are tightly regulated by the Medicines and Healthcare products Regulatory Agency (MHRA).
The Health Foundation think tank recently published a six-point national strategy to ensure AI tools are rolled out fairly and regulation is updated.
Nell Thornton, a senior improvement analyst at the Health Foundation, says: "There are so many of these models coming through the system that it's difficult to assess them quickly enough.
"That's where we need support around the capacity of the system to regulate these things and we also need some clarity on some of the challenges that will come from the quirkiness of generative AI systems and what additional regulation they might need."
Dr Paul Campbell, MHRA Head of Software and AI, says: "As a regulator, we must balance appropriate oversight to protect patient safety with the agility needed to respond to the particular challenges presented by these products to ensure we continue to be an enabler for innovation."
The Department of Health and Social Care says the new Labour government will "harness the power of AI" by purchasing new AI-enabled scanners to diagnose patients earlier and treat them faster.
While few can deny the transformative effect AI is having within healthcare, there are challenges to overcome, not least that NHS staff need the confidence to use it and patients must be able to trust it.