Summary
Three deep learning computer vision algorithms efficiently and accurately read and classified a cohort of 20,836 knee radiographs from patients with ACL injuries by laterality, projection, presence and type of implant, and degree of osteoarthritis using Kellgren-Lawrence grading, demonstrating AI's utility in creating detailed large-scale radiographic registries.
Abstract
Background
Developing large-scale, standardized radiographic registries for patients with anterior cruciate ligament (ACL) injuries can greatly facilitate patient-centered care and predictive orthopedics. However, these efforts require significant resources and time in the absence of artificial intelligence tools. To overcome this limitation, we propose the deployment of Artificial Intelligence for Knee Imaging Registration and Analysis (AKIRA), a trio of deep learning computer vision algorithms, to automatically classify and annotate the myriad radiographs stored within institutional picture archiving and communication systems (PACS). We hypothesize that these computer vision algorithms can rapidly and accurately organize radiographs by laterality and projection, identify implants, and classify arthritis grade.
Methods
A collection of 20,836 knee radiographs from all treatment timepoints (mean orthopedic follow-up 70.7 months [IQR: 6.8-172 months]) was aggregated from 1628 patients (median age 26 years [IQR: 19-42], 57% male) with ACL injuries. Three deep learning algorithms (EfficientNet, YOLO, and ResNet) were either developed or utilized. Radiograph laterality (left, right, bilateral) and projection (anteroposterior [AP], lateral, sunrise, posteroanterior [PA], hip-knee-ankle [HKA], Camp-Coventry intercondylar [notch]) were labeled by a previously developed deep learning classification model. Two independent reviewers manually labeled the metal fixation implants on a subset of radiographs using bounding boxes, which were used to develop and validate a deep learning object detection algorithm. The degree of osteoarthritis, based on Kellgren-Lawrence grades on standing anteroposterior radiographs, was classified using a previously developed deep learning classification algorithm. The laterality and projection classification model and the Kellgren-Lawrence classification model were evaluated on a subset of 371 radiographs to ensure the quality of the classifications. Upon successful validation, AKIRA was applied to all radiographs to establish a standardized radiographic registry.
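The sequential labeling workflow described above can be sketched as follows. All function bodies are hypothetical stubs standing in for the trained EfficientNet, YOLO, and ResNet models; the actual AKIRA weights, APIs, and label sets are not shown in this abstract:

```python
# Minimal sketch of a sequential radiograph-labeling pipeline in the
# spirit of AKIRA. The three "models" below are hypothetical stubs;
# real use would load trained EfficientNet/YOLO/ResNet weights.

def classify_view(image):
    # Stand-in for the laterality/projection classifier (EfficientNet).
    return {"laterality": "right", "projection": "AP"}

def detect_implants(image):
    # Stand-in for the object-detection model (YOLO); returns bounding
    # boxes as (label, x, y, width, height, confidence) tuples.
    return [("tibial metal screw", 120, 340, 40, 90, 0.97)]

def grade_kl(image):
    # Stand-in for the Kellgren-Lawrence classifier (ResNet).
    return 2

def label_radiograph(image):
    record = classify_view(image)
    record["implants"] = detect_implants(image)
    # KL grading is defined on standing AP radiographs only.
    if record["projection"] == "AP":
        record["kl_grade"] = grade_kl(image)
    return record

# Building the registry is then a single pass over the image collection.
registry = [label_radiograph(img) for img in ["radiograph_001.dcm"]]
```

The key design point is that the view classifier runs first, so downstream models (implant detection, KL grading) are only applied to the projections on which they are valid.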
Results
Validation of the classification algorithms yielded excellent discriminative performance when classifying radiographic laterality (F1 score: 0.962-0.975). For radiographic projection, the algorithm achieved perfect discrimination (F1 score: 1.0) of lateral and sunrise views, while notch, PA, and AP views produced F1 scores of 0.941, 0.970, and 0.950, respectively. For object detection, the algorithm achieved areas under the precision-recall curve (AUPRC) of 0.981, 0.983, 0.695, 0.935, 0.991, and 0.992 when identifying femoral metal screws, femoral metal buttons, other femoral metal fixation, tibial metal screws, tibial metal buttons, and other tibial metal fixation, respectively. The Kellgren-Lawrence classifier reached a concordance of 0.39-0.40, but when labels were binarized to "arthritis" and "no arthritis", the concordance increased to 0.81-0.82. Sequential deployment of AKIRA following internal validation processed and labeled all 20,836 images with the appropriate views, implants, and presence of arthritis within 88 minutes.
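For reference, the F1 score reported above is the harmonic mean of precision and recall, and the binarized arthritis labels collapse the five Kellgren-Lawrence grades into two classes. A minimal sketch follows; the counts are illustrative, and the KL >= 2 cutoff is the conventional radiographic-arthritis threshold, assumed here rather than stated in the abstract:

```python
def f1_score(tp, fp, fn):
    # Harmonic mean of precision and recall from confusion-matrix counts.
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def binarize_kl(grade, threshold=2):
    # Collapse KL grades 0-4 into "no arthritis" vs "arthritis".
    # The threshold of 2 is the conventional cutoff, assumed here.
    return "arthritis" if grade >= threshold else "no arthritis"

# Illustrative counts only, not the study's data.
print(round(f1_score(tp=95, fp=5, fn=5), 3))  # prints 0.95
```

Binarization explains why concordance rises from 0.39-0.40 to 0.81-0.82: near-miss grade disagreements (e.g., KL 2 vs KL 3) no longer count as errors once both fall on the same side of the cutoff.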
Conclusion
We successfully leveraged a collection of computer vision algorithms known as AKIRA to automate classification and object detection in a large cohort of radiographs in patients with ACL injuries, ultimately establishing an AI-enabled radiographic registry with information on laterality, radiographic projection, fixation implant, and degree of arthritis.