Chatbots like ChatGPT might be as good as physicians when it comes to giving ophthalmology patients advice, according to an Aug. 25 report from Medscape.
A study published in the Aug. 22 edition of JAMA Network Open found that eight trained ophthalmologists were able to distinguish medical advice written by humans from advice written by chatbots with 61.3 percent accuracy.
The study, which looked at responses to 200 different eye care questions, concluded that chatbots might be just as effective at giving medical advice as trained physicians.
The likelihood of a chatbot response containing incorrect or inappropriate material was similar to that of a human response: 77.4 percent of chatbot responses contained no incorrect information, compared with 75.4 percent of human responses. Potential harm from answers was deemed unlikely in 86.5 percent of chatbot answers and 84 percent of human answers.
The chatbot was more prone to fabricated responses, according to the report. For example, when asked whether cataract surgery could "shrink" the eye, it replied that "removal of the cataract can cause a decrease in the size of the eye."
Previous studies of ChatGPT's use in ophthalmology have shown varying results. A 2023 study from Atlanta-based Emory University found that ChatGPT-4 could appropriately diagnose an ophthalmic patient 93 percent of the time, while researchers in Canada reported that ChatGPT answered only 46 percent of questions correctly on an ophthalmology board certification test prep module.