From imaging interpretation and health monitoring to drug development, the role of artificial intelligence (AI) in medicine has increased. But AI is not ready to replace humans when it comes to the diagnosis of sports medicine conditions. Rather, in highly specialized fields such as sports medicine, when it comes to interpretation of diagnostic studies such as magnetic resonance imaging scans (that are more sophisticated than simple radiographs), experts outperform AI systems at present. Key features of clinical practice, such as the physical examination, in-person consultation, and ultimately, decision making, cannot be easily replaced. As every novel “smart” tool is incorporated into our lives, we need to be ready to embrace its use, but we also ought to be critical of its implementation and seek transparency at every step of the process. We cannot afford to see AI as an antagonistic element in our practices but rather as a valuable assistant that could someday improve diagnostic accuracy.
Artificial intelligence (AI) is already here and part of our lives: smartphones with face and speech recognition, computer software that blocks spam email and recommends shopping items, and social media applications that hierarchize news and advertisements (Fig 1). AI’s role in medicine has also increased. From imaging interpretation and health monitoring to drug development, numerous AI applications have been directly or indirectly used in health care.
In their study “Diagnostic Performance of Artificial Intelligence for Detection of Anterior Cruciate Ligament and Meniscus Tears: A Systematic Review,” Kunze, Rossi, White, Karhade, Deng, Williams, and Chahla
systematically reviewed 11 studies that evaluated the performance of AI models in diagnosing anterior cruciate ligament or meniscus tears. They found that AI models can be accurate in detecting anterior cruciate ligament and meniscus tears. Notably, every study that compared AI with clinical experts found that AI had either lower diagnostic accuracy or no difference in diagnostic accuracy compared with humans. By contrast, similar studies of fracture detection found that AI tools can detect fractures more efficiently than clinicians.
It appears that in highly specialized fields, such as sports medicine, with diagnostic tools such as magnetic resonance imaging scans that are more sophisticated than simple radiographs, experts outperform AI systems at present.
The key question is whether AI is ready to take over a part of the diagnostic process. The data so far have shown promise, but clearly, additional studies are necessary to demonstrate better accuracy with AI models. The principles of evidence-based medicine need to guide our decisions about incorporating AI technology. As for every novel application that may change our clinical practice, AI applications require studies with a high level of evidence, elimination of bias, and long-term outcomes.
We need to temper our enthusiasm for these novel tools until we have robust, long-term, and replicated data.
An additional key question concerns clinical safety. AI systems face several limitations and obstacles. Concerns have been raised regarding AI algorithms such as IBM Watson Health’s cancer AI algorithm, which was allegedly trained on limited real patient data and provided questionable treatment recommendations. Such problems highlight the need for complete transparency and independent AI research with a high level of evidence prior to use in clinical practice. The recent high-profile collapse of Theranos, which claimed it could perform a series of blood tests with a very small amount of blood, should remind us of the value of actual data, patience, and a constant quest for validation. Last but not least, the use of AI applications in health care poses a major risk to health data privacy and security.
As we move forward, the decision is clear. The recent example of the COVID-19 (coronavirus disease 2019) pandemic, which led to large-scale adoption of technologies to assist in care, such as teleconferences, virtual visits, and telemedicine, showed us that we can improve our practice by incorporating technology. It also highlighted that some key features of our practice, such as the clinical examination, in-person contact, and ultimately, decision making, cannot be easily replaced. As every novel “smart” tool is incorporated into our lives, we need to be ready to embrace its use, but we also ought to be critical of its implementation and seek transparency at every step of the process. We cannot afford to see AI as an antagonistic element in our practices but rather should regard it as a valuable assistant that could significantly improve diagnostic accuracy, once it has been rigorously validated.
The author reports the following potential conflicts of interest or sources of funding: N.K.P. receives personal fees as Arthroscopy Associate Editor from the Arthroscopy Association of North America, outside the submitted work. Full ICMJE author disclosure forms are available for this article online, as supplementary material.