The future of AI: Enhancing patient safety
10-01-2024

Generative artificial intelligence (Gen AI) is known for producing innovative, realistic outputs by drawing on billions of data points. It is already a powerful tool across many sectors, assisting with everyday tasks such as online shopping and content creation.

The applications of Gen AI in healthcare are just beginning to be realized. The technology has the potential to revolutionize medical imaging, predict disease progression for individual patients, and even assist in discovering new vaccines.

Gen AI and patient safety

A recent study conducted by researchers at the BU Chobanian & Avedisian School of Medicine has produced some remarkable results.

The experts subjected GPT-4, an advanced, publicly available Gen AI model, to a rigorous 50-question self-assessment drawn from the Certified Professional in Patient Safety (CPPS) exam, a standardized multiple-choice certification test for patient safety professionals.

GPT-4 performed impressively, answering 88% of the questions correctly (44 of 50).
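For readers curious how an evaluation like this might be scored in practice, the sketch below shows one way to pose multiple-choice questions to a model through the OpenAI API and compute the percentage answered correctly. The prompt wording, data format, and function names are illustrative assumptions, not the study's actual protocol.

```python
# Hypothetical sketch: grading a model on a multiple-choice exam.
# The question format, prompt wording, and scoring rule are assumptions,
# not the method used in the BU study.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def ask(question: str, options: dict[str, str]) -> str:
    """Ask the model one multiple-choice question and return its letter choice."""
    prompt = question + "\n" + "\n".join(f"{k}. {v}" for k, v in options.items())
    prompt += "\nAnswer with the single letter of the best option."
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # deterministic answers make grading repeatable
    )
    return response.choices[0].message.content.strip()[:1].upper()


def score(exam: list[dict]) -> float:
    """Return the fraction of questions the model answers correctly."""
    correct = sum(ask(q["question"], q["options"]) == q["answer"] for q in exam)
    return correct / len(exam)

# On a 50-question exam, a score of 0.88 corresponds to 44 correct answers.
```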

Gen AI performance in the real world

Study co-author Dr. Nicholas Cordella is an assistant professor of medicine at BU Chobanian & Avedisian School of Medicine.

“While other studies have looked at genAI’s performance on exams from different healthcare specialties over the past year, ours is the first robust test of its proficiency specifically in patient safety,” said Dr. Cordella.

GPT-4’s impressive performance was achieved without any additional technical training or medical fine-tuning.

The AI model performed notably well in the domains of Patient Safety and Solutions, Measuring and Improving Performance, and Systems Thinking and Design/Human Factors.

These results prompted the team to encourage further testing of AI’s real-world strengths and weaknesses.

A glimpse into the future

The study shows considerable promise for the integration of AI into healthcare safety systems.

The research hints at AI’s potential to assist doctors in recognizing, addressing, and preventing mistakes in hospitals and clinics.

“While more research is needed to fully understand what current AI can do in patient safety, this study shows that AI has some potential to improve healthcare by assisting clinicians in addressing preventable harms,” said Dr. Cordella.

Addressing the issue of medical errors

The researchers also explored how AI might help tackle the serious problem of medical errors, which are estimated to contribute to around 400,000 deaths every year.

However, it’s crucial to be aware of the limitations of current AI technology. Vigilance must be maintained against biases, false confidence, and the potential for fabricated data, particularly with large language models like GPT-4.

“AI has the potential to significantly enhance patient safety, marking an enabling step towards leveraging this technology to reduce preventable harms and achieve better healthcare outcomes,” said Dr. Cordella.

“However, it’s important to recognize this as an initial step, and we must rigorously test and refine AI applications to truly benefit patient care.”

Exciting opportunities ahead

As artificial intelligence continues to advance, its integration into healthcare systems presents both exciting opportunities and significant challenges.

While AI models like GPT-4 have demonstrated impressive capabilities, including promise for improving patient safety, several hurdles must be addressed before AI can become a reliable component of clinical settings.

Ensuring that AI systems operate with transparency and accuracy is paramount, especially in high-stakes environments where errors can have severe consequences.

Overcoming significant challenges

One major challenge is the inherent bias present in AI models. Since these systems are trained on vast amounts of data, they can inadvertently replicate the biases found in their datasets.

In healthcare, where decisions impact patient outcomes, such biases can lead to unequal treatment or unintended harm.

Additionally, AI’s tendency to produce “hallucinations” or generate plausible but incorrect information poses a risk that must be mitigated through rigorous testing and oversight.

However, with ongoing research and development, AI’s ability to support healthcare professionals could evolve rapidly.

By identifying patterns in data that humans might overlook, AI can assist in diagnosing complex conditions, personalizing treatment plans, and predicting potential complications.

As the technology evolves, AI could play a key role in minimizing preventable errors and enhancing the overall quality of care.

The study is published in The Joint Commission Journal on Quality and Patient Safety.

