
Study Raises Doubts on ChatGPT's Expertise in Medical Queries

Published December 10, 2023

New insight into the reliability of artificial intelligence-powered tools for answering medical questions has emerged from a study conducted at Long Island University. According to the findings, AI is far from foolproof when handling medical questions, particularly those related to medications.

Study Context and Methodology

A recent investigation led by researchers at Long Island University put ChatGPT, the widely used AI chatbot, under scrutiny for its potential application in the medical field. The team posed 39 real-world medication-related queries, sourced from the university's College of Pharmacy, to the free version of ChatGPT. The questions covered a diverse range of drug-related concerns typically encountered by pharmacists and other health professionals.

Findings and Implications

The results of the study highlighted a substantial limitation in ChatGPT's ability to provide accurate and reliable medical information: according to the researchers, the chatbot gave satisfactory responses to only about 10 of the 39 questions, with the remaining answers judged incomplete, inaccurate, or unresponsive to the question asked. The outcome underscores the gaps in artificial intelligence's handling of complex medical questions and highlights the risk of misinformation should AI be relied upon alone in critical health-related contexts. While the technology shows promise in many sectors, the findings counsel caution in its application within the healthcare industry.

Investors and stakeholders in AI and healthcare technologies are watching developments like these closely. Innovations in AI and their implications for healthcare practice carry significant weight in market perceptions and could influence the future direction of investment in the health tech and AI sectors.

AI, Healthcare, Study