Using AI in medical research publishing: Guidance for HCPs

MDlinx Jan 25, 2024

Generative AI can produce text, imagery, audio, and synthetic data. Platforms such as ChatGPT, DALL-E, Jasper AI, NovelAI, and Rytr AI are seeing increasing use in the medical research world, which necessitates guidelines to offset the potential for plagiarism, the spread of misinformation, and other risks to the credibility of highly regarded academic journals.

Elsevier updated its authorship policies in 2023 to offer guidance on how physicians and other HCPs can use AI tools in manuscript writing. This guidance serves as a good springboard for assessing the implications of AI in medical publishing.

AI and authorship

Elsevier’s new editorial policy on the use of generative AI in publication aims to provide greater transparency and guidance to authors, editors, and reviewers.

Publishing ethics. Elsevier.

According to Elsevier’s new policy, authors are asked to disclose when AI was used in writing a publication. The policy also holds authors, physicians included, responsible for any inaccuracies that result from AI-generated text.

AI tools, however, cannot be listed as authors, a rule that publishers beyond Elsevier have also adopted.

Hufton AL. No artificial intelligence authors, for now. Patterns (N Y). 2023;4(4):100731.

Authors can use generative AI or AI-assisted technologies before submission to enhance the language and readability of the document, but they must appropriately disclose this use (eg, in a disclosure statement in a separate section before the list of references).

AI and peer review

The problem with using generative AI during peer review is that uploading another’s work to an AI platform for feedback would violate confidentiality.

Elsevier’s AI policy cautions that uploading a manuscript could violate the original author’s proprietary rights, and that any personally identifiable information it contains could breach privacy rights.

Similarly, reviewers should not upload their own peer review reports to an AI platform.

“Peer review is at the heart of the scientific ecosystem and Elsevier abides by the highest standards of integrity in this process,” per the publisher. “Reviewing a scientific manuscript implies responsibilities that can only be attributed to humans. Generative AI or AI-assisted technologies should not be used by reviewers to assist in the scientific review of a paper as the critical thinking and original assessment needed for peer review is outside of the scope of this technology and there is a risk that the technology will generate incorrect, incomplete or biased conclusions about the manuscript. The reviewer is responsible and accountable for the content of the review report.”

The risk of fabricating research

Another pitfall of generative AI is that it could be used to fabricate medical research. The allure of fame, the high-pressure nature of the field, and the pursuit of funding may all tempt a researcher to fabricate data using AI.

One area of particular concern is that medical students may fabricate research to distinguish themselves from other applicants competing for coveted spots in ultracompetitive specialties such as plastic surgery or dermatology. Because the USMLE Step 1 exam switched to pass/fail scoring in 2022, there are fewer metrics by which to evaluate candidates for these spots, and some may be tempted to bolster their qualifications with fabricated publications.

“The feasibility of producing fabricated work, coupled with the difficult-to-detect nature of published works and the lack of AI-detection technologies, creates an opportunistic atmosphere for fraudulent research,” according to an editorial published in Patterns.

Elali FR, Rachid LN. AI-generated research paper fabrication and plagiarism in the scientific community. Patterns (N Y). 2023;4(3):100706.

“Risks of AI-generated research include the utilization of said work to alter and implement new healthcare policies, standards of care, and interventional therapeutics,” the authors of the editorial state. They note that researchers may be inclined to use AI to “streamline mundane processes in the research field,” but it can “pollute the field of scientific research and undermine the legitimate works produced by other authors.”

What this means for you

AI is constantly evolving, and any attempt to align its massive potential with the requirements of ethical publication will evolve along with it. AI tools currently lack the agency and independence to responsibly author content or consent to publication, which keeps them from outright authoring studies and peer-reviewed articles. AI may be used to enhance the language and organization of a publication, but it should not be used during peer review. Fabrication of data using AI is also a major concern, one that the development of better detection tools will need to address on an ongoing basis.

 
