AAOS OrthoInfo Provides More Accessible Information Regarding Rotator Cuff Surgery Than ChatGPT

Catherine Hand, BS, UNITED STATES Camden Bohn, BA, UNITED STATES Shadia Tannir, BA, UNITED STATES Yining Lu, MD, UNITED STATES Erick Marigi, MD, UNITED STATES Josh Chang, BS, UNITED STATES Daanish Khazi-Syed, BS, UNITED STATES Brian Forsythe, MD, UNITED STATES

Rush University Medical Center, Chicago, IL, UNITED STATES


2025 Congress   ePoster Presentation

 


Summary: ChatGPT's information on rotator cuff surgery is less accessible, requiring a higher education level for understanding compared to the more easily readable content from AAOS OrthoInfo, highlighting the need for improved readability in AI-driven health resources.


Background

Artificial intelligence (AI) is increasingly used in healthcare, providing patients with new ways to access health information. ChatGPT, an AI-based platform, shows potential as a resource for patients seeking details about their conditions. This study assesses the readability of information regarding rotator cuff surgery from ChatGPT and compares it to that from the American Academy of Orthopaedic Surgeons (AAOS).

Methods

Key questions derived from the AAOS OrthoInfo page on rotator cuff tears were used, covering topics such as the rotator cuff's function, surgical options, causes of injury, indications for seeing a doctor, non-surgical healing, and criteria for surgery (see Table 1). ChatGPT was prompted to generate responses to these questions. The readability of the content from AAOS OrthoInfo and ChatGPT was evaluated using the Flesch-Kincaid Reading Ease Index, Flesch-Kincaid Grade Level, Gunning Fog Index, Coleman-Liau Index, Simple Measure of Gobbledygook (SMOG) Index, FORCAST Readability Formula, Fry Readability Graph, and Raygor Readability Graph. Two-tailed t-tests were used to compare mean readability scores, with p < 0.01 considered statistically significant.
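The two Flesch-Kincaid measures used above are simple functions of average sentence length and syllable density. As an illustrative sketch only (not the tool used in this study), they can be computed as follows; the syllable counter is a rough vowel-group heuristic, so scores will differ somewhat from dedicated readability calculators:

```python
import re

def count_syllables(word: str) -> int:
    """Approximate syllables as runs of vowels, with a silent-e adjustment."""
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and not word.endswith("le") and count > 1:
        count -= 1  # drop a likely silent final 'e'
    return max(count, 1)

def readability(text: str):
    """Return (Flesch Reading Ease, Flesch-Kincaid Grade Level) for a text."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    wps = len(words) / len(sentences)   # mean words per sentence
    spw = syllables / len(words)        # mean syllables per word
    ease = 206.835 - 1.015 * wps - 84.6 * spw
    grade = 0.39 * wps + 11.8 * spw - 15.59
    return round(ease, 1), round(grade, 1)
```

Short, monosyllabic sentences score high on reading ease and low on grade level; long sentences with polysyllabic words do the opposite, which is the pattern the study's comparison rests on.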

Results

The AAOS OrthoInfo material had an average reading grade level of 11.9 and a reading ease score of 52.5. In contrast, ChatGPT's content had a higher average reading grade level of 14.7 and a reading ease score of 25.9, indicating that undergraduate-level education is needed for comprehension. The difference in reading ease between ChatGPT and OrthoInfo was statistically significant (25.9 versus 52.5, respectively; p < 0.0001) (Figure 2), as was the difference in average reading grade level (14.7 versus 11.9, respectively; p < 0.01) (Figure 1).

Conclusions

ChatGPT’s information on rotator cuff surgery requires a higher education level to understand than that of AAOS OrthoInfo; thus, OrthoInfo currently provides more accessible patient information. Developers of AI tools should improve the readability of generated content to address health literacy gaps.