
Science publishing is no exception to the growing use of artificial intelligence and large language models such as ChatGPT. The use of AI tools is not inherently negative, but as with every other aspect of publishing research, transparency and accountability around their use are critical for maintaining the integrity of the scholarly record. It is impossible to predict how AI will develop in the coming years, but there is still value in establishing some basic principles for its use in preprints.

After consultation with ChemRxiv’s Scientific Advisory Board, ChemRxiv has made the following two adjustments to its selection criteria to cover the use of AI by our authors:

  • AI tools cannot be listed as authors, as they cannot meaningfully review the final draft, approve its submission, or take accountability for its content. All co-authors of the text, however, are accountable for the final content and should carefully check for any errors introduced through the use of an AI tool.
  • The use of AI tools, including the name of the tool and how it was used, should be disclosed in the text of the preprint. This disclosure could appear in the Materials and Methods, in a statement at the end of the manuscript, or in whatever location best fits the format of the preprint.

Some authors have already used AI language tools to help draft or polish the text of their work, and others have studied how effectively these tools handle chemistry concepts. See some recent preprints related to ChatGPT here.

ChemRxiv authors are welcome to use such tools ethically and responsibly in accordance with our policy. If you have any questions about the use of AI tools in preparing your preprint, please view our Policies page and the author FAQs or contact our team at curator@chemrxiv.org.
