2025 ISAKOS Biennial Congress ePoster
AAOS OrthoInfo Provides More Accessible Information Regarding Rotator Cuff Surgery Than ChatGPT
Catherine Hand, BS, San Antonio, TX UNITED STATES
Camden Bohn, BA UNITED STATES
Shadia Tannir, BA, Fort Wayne, IN UNITED STATES
Yining Lu, MD, Rochester, Minnesota UNITED STATES
Erick Marigi, MD, Jacksonville, Florida UNITED STATES
Josh Chang, BS, Chicago, IL UNITED STATES
Daanish Khazi-Syed, BS, Dallas, TX UNITED STATES
Brian Forsythe, MD, Chicago, IL UNITED STATES
RUSH University Medical Center, Chicago, IL, UNITED STATES
FDA Status Not Applicable
Summary
ChatGPT's information on rotator cuff surgery is less accessible, requiring a higher education level to understand than the more easily readable content from AAOS OrthoInfo, highlighting the need for improved readability in AI-driven health resources.
Abstract
Background
Artificial intelligence (AI) is increasingly used in healthcare, providing patients with new ways to access health information. ChatGPT, an AI-based platform, shows potential as a resource for patients seeking details about their conditions. This study assesses the readability of information regarding rotator cuff surgery from ChatGPT and compares it to that from the American Academy of Orthopaedic Surgeons (AAOS).
Methods
Key questions derived from the AAOS OrthoInfo page on rotator cuff tears were used, covering topics such as the rotator cuff’s function, surgical options, causes of injury, indications for seeing a doctor, non-surgical healing, and criteria for surgery (refer to Table 1). ChatGPT was used to generate responses to these questions. The readability of the content from AAOS OrthoInfo and ChatGPT was evaluated using the Flesch-Kincaid Reading Ease Index, Flesch-Kincaid Grade Level, Gunning Fog Index, Coleman-Liau Index, Simple Measure of Gobbledygook (SMOG) Index, FORCAST Readability Formula, Fry Readability Graph, and Raygor Readability Graph. Two-tailed t-tests were used to compare mean readability scores, with a p-value of less than 0.01 considered statistically significant.
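For illustration, the two Flesch-Kincaid metrics above can be computed from word, sentence, and syllable counts using their standard published formulas. The sketch below is a minimal approximation: the syllable counter is a naive vowel-group heuristic (an assumption made here for self-containment; the study would have used a dedicated readability tool with dictionary-based syllabification), so its scores will differ somewhat from professionally computed values.

```python
import re

def count_syllables(word):
    # Naive heuristic: count contiguous vowel groups (assumption;
    # real readability tools use dictionary-based syllable counts).
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def _counts(text):
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return max(1, len(words)), sentences, syllables

def flesch_reading_ease(text):
    # Standard formula: 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words)
    n, s, syl = _counts(text)
    return 206.835 - 1.015 * (n / s) - 84.6 * (syl / n)

def flesch_kincaid_grade(text):
    # Standard formula: 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59
    n, s, syl = _counts(text)
    return 0.39 * (n / s) + 11.8 * (syl / n) - 15.59
```

Higher reading-ease scores indicate easier text (scores near 60-70 correspond to plain English), while the grade-level formula maps directly to U.S. school grades, which is how results such as OrthoInfo's 11.9 versus ChatGPT's 14.7 are interpreted.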
Results
The AAOS OrthoInfo material had an average reading grade level of 11.9 and a reading ease score of 52.5. In contrast, ChatGPT’s content had a higher average reading grade level of 14.7 and a lower reading ease score of 25.9, indicating that undergraduate-level education is needed for comprehension. The difference in reading ease between ChatGPT and OrthoInfo was statistically significant (25.9 versus 52.5, respectively; p < 0.0001) (Figure 2), as was the difference in average reading grade level (14.7 versus 11.9, respectively; p < 0.01) (Figure 1).
Conclusions
ChatGPT’s information on rotator cuff surgery requires a higher education level to understand than that of AAOS OrthoInfo; thus, OrthoInfo currently provides more accessible patient information. Developers of AI tools should improve the readability of generated content to help address health literacy gaps.