As is evident in a Letter to the Editor from Gilat and Cole, “How Will Artificial Intelligence Affect Scientific Writing, Reviewing and Editing? The Future is Here …”,1 ChatGPT is now impacting the medical literature. As ChatGPT was used to write part of the letter, we can say with certainty that ChatGPT is now published in Arthroscopy. ChatGPT is an artificial intelligence (AI) chatbot tool. In other words, ChatGPT is a machine, a program, a robot, or, technically, a large language model trained on enormous amounts of information from the Internet. It is able to respond to user prompts by answering questions; writing essays, poems, love letters, computer code, or business plans; solving problems, including math or physics; and more.2,3,4 “The bot doesn’t just search and summarize information that already exists. It creates new content, tailored to your request.”5
ChatGPT has been used to write parts of newspaper and magazine articles, testing readers’ ability to determine whether the writing was by the chatbot or a human,4 so it is no surprise that Editorial Board Member Ron Gilat and Journal Board of Trustees Member and AANA Past President Brian Cole have now used the bot to write part of their letter.1
Gilat and Cole caution that ChatGPT is subject to “major errors and biases,” reiterating lay press reports that the chatbot has a “misinformation problem,”5 does not always tell the truth,5 could be “weaponized to spread disinformation,”6 and could be used to create “deepfakes.”7
Gilat and Cole further caution that, “As authors, we need to make sure we do not use these tools to compose any part of a scientific work until these tools are … validated … and … perfectly accurate … (and) limited to specific tasks that do not compromise the integrity and originality of the work, and be subjected to meticulous human supervision.” This is consistent with China’s internet regulator, the Cyberspace Administration of China, enforcing regulation of what it calls “deep synthesis” technology, including AI-powered image, audio, and text-generation software, to keep it from spreading “fake news” or information deemed disruptive to the economy or national security, and requiring “providers of deep synthesis technologies, including companies, research organizations and individuals, to prominently label images, videos, and text as synthetically generated or edited when they could be misconstrued as real.”7
Similarly, New York City public schools have banned ChatGPT from the district’s networks and devices (with the caveat that schools can request “access to the tool for AI and tech-related educational purposes”). Los Angeles, San Francisco, and Philadelphia school districts are grappling with the issue,3
and, as suggested by Gilat and Cole,1 “some products such as Turnitin—a detection tool that thousands of school districts use to scan the internet for signs of plagiarism—are now looking into how its software could detect the usage of AI-generated text in student submissions.”3
And so, readers … did you suspect that the first half of the Letter by Gilat and Cole was written by a bot? I didn’t immediately suspect so. And yet, I was suspicious that something was amiss because: 1) ChatGPT was uncapitalized and spaced incorrectly; 2) the authors failed to initially explain to readers what ChatGPT is and does; 3) there were no references, whereas in many places, references were obviously required; and finally, 4) the last paragraph of the text written by the bot, plus the sentence prior to the last paragraph, added absolutely nothing that hadn’t already been said and should have been deleted as redundant. In my experience as an editor, while some novice authors might make some or all of these mistakes, Gilat and Cole are well known to me as frequent, prize-winning authors and editors. I was surprised they had submitted such a poorly written letter!
However, as I read on, I realized the ruse. Well done, Ron and Brian!
A few other comments. Just as the Internet can contain distortions, resulting in ChatGPT committing errors, user input can also direct chatbot misinformation. According to the Recommendations of the International Committee of Medical Journal Editors, it is authors who bear “responsibility and accountability for published work.”8
While we reviewers and editors do our best to make articles better and do our very best to detect errors or biases, authors are accountable for their work. Yet Gilat and Cole instructed the bot to provide “insight on the effect of artificial intelligence and tools” and write on “what reviewers and editors should do to adjust and maintain the high scientific standards of the journal,” and as a result, the chatbot wrote, “It is therefore important for reviewers and editors to carefully check the articles they review for any errors or biases that may have been introduced by AI tools.”1
Again, reviewers, editors, and our learned readers must certainly “consider the scientific accuracy and validity”1 of submissions and publications, but from an editorial standpoint, it is authors who are ultimately responsible for checking their articles for errors and bias, whether as a result of AI or otherwise.8
Gilat and Cole, not the bot, also wrote that authors should “not use these tools to compose any part” of a scientific submission, and wrote that in the future, the tools might be used for “specific tasks that do not compromise the integrity and originality of the work and be subjected to meticulous human supervision.”1
Personally, I’m not so sure authors need to wait to use ChatGPT. Regardless, whether now or later, I agree 100% that authors need to provide “meticulous human supervision” of the chatbot tool, whether this supervision is in the form of a rudimentary spelling check, addition of relevant references, or scholarly review to ensure the absence of errors and bias. Finally, I think the Cyberspace Administration of China’s requirement that authors label images, videos, and text as synthetically generated, in whole or in part,7 is a good idea. I plan to review this with my fellow editors, and I have queried our publisher as to their view of such a potential policy.
In closing, I very much appreciate the forward-thinking academic leadership of Drs. Gilat and Cole. Their letter is stimulating and inspires substantial consideration, due diligence, and a great deal of learning, and should ultimately result in improving our journals.
Very respectfully,
Supplementary Data
- ICMJE author disclosure forms
References
1. How will artificial intelligence affect scientific writing, reviewing and editing? The future is here…. Arthroscopy. 2023;39:XXX-XXX.
2. New chatbots can change the world. Can you trust them? New York Times. https://www.nytimes.com/2022/12/10/technology/ai-chat-bot-chatgpt.html?smid=nytcore-ios-share&referringSource=articleShare. Accessed January 11, 2023.
3. New York City public schools ban access to AI tool that could help students cheat. CNN Business News. https://www.cnn.com/2023/01/05/tech/chatgpt-nyc-school-ban/index.html. Accessed January 11, 2023.
4. A new era of AI blooms even amid the tech gloom.
5. Did a fourth grader write this? Or the new chatbot? New York Times. https://www.nytimes.com/interactive/2022/12/26/upshot/chatgpt-child-essays.html. Accessed January 11, 2023.
6. How A.I. could be used to spread disinformation.
7. China, a pioneer in regulating algorithms, turns its focus to deepfakes. Wall Street Journal. https://www.wsj.com/articles/china-a-pioneer-in-regulating-algorithms-turns-its-focus-to-deepfakes-11673149283?mod=Searchresults_pos1&page=1. Accessed January 11, 2023.
8. International Committee of Medical Journal Editors. Responsibilities in the submission and peer-review process.
Article info
Publication history
Published online: January 31, 2023
Publication stage
In Press, Journal Pre-Proof
Copyright
© 2023 by the Arthroscopy Association of North America