Artificial intelligence provides “massive opportunities” for improving healthcare and reducing the burden on doctors, experts have said after conducting a study that found AI-generated responses to patients outperformed doctors’ replies.

The new research compared responses from doctors with those from ChatGPT, with a panel of healthcare professionals preferring ChatGPT’s answers to the doctors’ responses 79% of the time.

What is ChatGPT?

ChatGPT is an AI language model based on the Generative Pre-trained Transformer (GPT) architecture, designed to understand and generate human-like text in response to the input it is given.

ChatGPT’s training data is limited to information available up to September 2021.
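For readers unfamiliar with how such a model is used in practice, the sketch below shows how a patient-style question might be sent to ChatGPT through the OpenAI Python client. The model name, system prompt and question are illustrative assumptions, not details taken from the study.

```python
# Minimal sketch: asking ChatGPT to draft a reply to a patient-style question.
# The model name, system prompt and question are illustrative assumptions only.
from openai import OpenAI

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

question = "I've had a mild headache for three days. Should I see a doctor?"

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumed model; the study used ChatGPT as publicly available
    messages=[
        {"role": "system",
         "content": "Draft a reply to a patient's question for a clinician to review."},
        {"role": "user", "content": question},
    ],
)

print(response.choices[0].message.content)  # draft answer for clinician review
```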

Could ChatGPT change the patient-doctor relationship?

The researchers behind the study have said AI could “transform” the way doctors support their patients and relieve some of the burden on staff who, following changes in working practices brought about by COVID-19, face a “barrage” of electronic messages from patients.

Dr Christopher Longhurst, Chief Medical Officer and Chief Digital Officer at UC San Diego Health, said: “Our study is among the first to show how AI assistants can potentially solve real world healthcare delivery problems.

“These results suggest that tools like ChatGPT can efficiently draft high quality, personalised medical advice for review by clinicians, and we are beginning that process at UCSD Health.”

The team set out to test the following question: Can ChatGPT respond accurately to questions patients send to their doctors?

They used Reddit’s AskDocs, a forum in which members can post medical questions and receive answers from verified healthcare professionals. The research team viewed these exchanges as a good reflection of real-world scenarios.

They randomly selected 195 exchanges on AskDocs and provided the original question to ChatGPT for a response. A panel of three licensed healthcare professionals then analysed both the doctor and ChatGPT responses, comparing the quality and empathy of each answer.
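The study does not publish its analysis code, but as a rough illustration of the comparison described above, the sketch below tallies which of two paired responses each assessor preferred and reports the overall preference rate. The data structure, assessor labels and numbers are hypothetical.

```python
# Illustrative sketch only: tallying assessors' preferences across paired responses.
# The records below are hypothetical and not taken from the study.
from dataclasses import dataclass

@dataclass
class Rating:
    exchange_id: int   # which question/answer exchange was assessed
    assessor: str      # panel member identifier
    preferred: str     # "chatgpt" or "doctor"

ratings = [
    Rating(1, "A", "chatgpt"), Rating(1, "B", "chatgpt"), Rating(1, "C", "doctor"),
    Rating(2, "A", "doctor"),  Rating(2, "B", "chatgpt"), Rating(2, "C", "chatgpt"),
]

chatgpt_preferred = sum(r.preferred == "chatgpt" for r in ratings)
rate = chatgpt_preferred / len(ratings)
print(f"ChatGPT preferred in {rate:.0%} of assessments")
```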

AI provides better quality responses than doctors

The research team found that 79% of the time, the assessors preferred the ChatGPT responses.

In addition, the number of answers rated good or very good in quality was more than three times higher for ChatGPT than for the doctors’ responses.

Dr John W. Ayers, vice chief of innovation in the UC San Diego School of Medicine Division of Infectious Disease and Global Public Health, said: “The opportunities for improving healthcare with AI are massive. AI-augmented care is the future of medicine.”

Study co-author Dr Davey Smith, a professor at the UC San Diego School of Medicine, added: “ChatGPT might be able to pass a medical licensing exam but directly answering patient questions accurately and empathetically is a different ballgame.”

Study co-author Dr Mark Dredze, the John C. Malone Associate Professor of Computer Science at Johns Hopkins University, said: “We could use these technologies to train doctors in patient-centred communication, eliminate health disparities suffered by minority populations who often seek healthcare via messaging, build new medical safety systems, and assist doctors by delivering higher quality and more efficient care.”

Read the full study in JAMA Internal Medicine.
