Springer Nature, the world’s largest academic publisher, has clarified its policies on the use of AI writing tools in scientific papers. The company announced this week that software like ChatGPT can’t be credited as an author in papers published in its thousands of journals. However, Springer says it has no problem with scientists using AI to help write or generate ideas for research, as long as this contribution is properly disclosed by the authors.
“We felt compelled to clarify our position: for our authors, for our editors, and for ourselves,” Magdalena Skipper, editor-in-chief of Springer Nature’s flagship publication, Nature, tells The Verge. “This new generation of LLM tools — including ChatGPT — has really exploded into the community, which is rightly excited and playing with them, but [also] using them in ways that go beyond how they can genuinely be used at present.”
ChatGPT and earlier large language models (LLMs) have already been named as authors in a small number of published papers, preprints, and scientific articles. However, the nature and degree of the contribution of these tools vary case by case.
In one opinion article published in the journal Oncoscience, ChatGPT is used to argue for taking a certain drug in the context of Pascal’s wager, with the AI-generated text clearly labeled. But in a preprint paper examining the bot’s ability to pass the United States Medical Licensing Examination (USMLE), the only acknowledgment of the bot’s contribution is a sentence stating the program “contributed to the writing of several sections of this manuscript.”
In the latter preprint paper, there are no further details on how or where ChatGPT was used to generate text. (The Verge contacted the authors but didn’t hear back in time for publication.) However, the CEO of the company that funded the research, healthcare startup Ansible Health, argued the bot made significant contributions. “The reason why we listed [ChatGPT] as an author was because we believe it actually contributed intellectually to the content of the paper and not just as a subject for its evaluation,” Ansible Health CEO Jack Po told Futurism.
Reaction in the scientific community to papers crediting ChatGPT as an author has been predominantly negative, with social media users calling the decision in the USMLE case “absurd,” “silly,” and “deeply stupid.”
The core argument against giving AI authorship is that software simply can’t fulfill the required duties, as Skipper and Springer Nature explain. “When we think of authorship of scientific papers, of research papers, we don’t just think about writing them,” says Skipper. “There are responsibilities that extend beyond publication, and certainly at the moment these AI tools are not capable of assuming those responsibilities.”
Software cannot be meaningfully accountable for a publication, it cannot claim intellectual property rights for its work, and it cannot correspond with other scientists and the press to explain and answer questions on its work.
If there is broad consensus against crediting AI as an author, though, there is less clarity on the use of AI tools to write a paper, even with proper acknowledgment. This is in part due to well-documented problems with the output of these tools. AI writing software can amplify social biases like sexism and racism and has a tendency to produce “plausible bullshit” — incorrect information presented as fact. (See, for example, CNET’s recent use of AI tools to write articles. The publication later found errors in more than half of those published.)
It’s because of issues like these that some organizations have banned ChatGPT, including schools, colleges, and sites that depend on sharing reliable information, like programming Q&A repository Stack Overflow. Earlier this month, a top academic conference on machine learning banned the use of all AI tools to write papers, though it did say authors could use such software to “polish” and “edit” their work. Exactly where one draws the line between writing and editing is tricky, but for Springer Nature, this use case is also acceptable.
“Our policy is quite clear on this: we don’t prohibit their use as a tool in writing a paper,” Skipper tells The Verge. “What’s fundamental is that there is clarity. About how a paper is put together and what [software] is used. We need transparency, as that lies at the very heart of how science should be done and communicated.”
This is particularly important given the wide range of applications AI can be used for. AI tools can not only generate and paraphrase text but also iterate on experiment design or act as a sounding board for ideas, like a machine lab partner. AI-powered software like Semantic Scholar can be used to search for research papers and summarize their contents, and Skipper notes that another opportunity is using AI writing tools to help researchers for whom English is not their first language. “It may be a leveling tool from that perspective,” she says.
Skipper says that banning AI tools in scientific work would be ineffective. “I think we can safely say that outright bans of anything don’t work,” she says. Instead, she argues, the scientific community — including researchers, publishers, and conference organizers — needs to come together to work out new norms for disclosure and guardrails for safety.
“It’s incumbent on us as a community to focus on the positive uses and the potential, and then to regulate and curb the potential misuses,” says Skipper. “I’m optimistic that we can do it.”