
Publishers Respond to Generative AI
Author: Samantha Goldman; info@ciddl.org
Just as classroom teachers are considering how ChatGPT will impact their ability to identify plagiarism, publishers are facing similar ethical concerns. The issue boils down to the fact that AI is not human. One question authors are asking is how to cite generative AI (such as Bard or ChatGPT). Another is whether AI should receive authorship credit if it is used to write an article. There are also concerns about the accuracy of the information AI produces. Finally, at what point is the article no longer the author’s original ideas (supported by research)? To address these questions, we provide a summary of guidelines from publishers.
Do I Cite ChatGPT? And, If So, How?
According to APA, one major issue with using Bard or other similar generative AI is that the output is not reproducible: someone could ask an AI the same question multiple times and get a different answer each time. Citing nonreproducible sources is not unheard of in APA style, as conversations can be cited as personal communications. However, as previously mentioned, AI is not human and therefore cannot be cited as a personal communication. APA’s response is to credit the algorithm’s author in both the in-text citation and the reference list. The in-text citation would be (OpenAI, 2023).
Suggested reference:
OpenAI. (2023). ChatGPT (August 3 version) [Large language model]. https://chat.openai.com/chat
To find the version date, log in to ChatGPT and scroll to the bottom of the page, where the version is listed.
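In practice, the in-text citation works like any other. A sentence drawing on a ChatGPT response might read as follows (a hypothetical illustration; the prompt and output here are invented for this post): When asked to describe universal design for learning, ChatGPT characterized it as a framework for proactively designing flexible instruction (OpenAI, 2023).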
Do I List the AI as an Author?
Drs. Marino (CIDDL Co-PI), Vasquez (CIDDL Co-PI), Dieker, Basham (CIDDL PI), and Blackorby raised the question of authorship when using AI in their recent article. According to the Committee on Publication Ethics (COPE), AI cannot be an author on a paper because it does not meet the criteria for authorship: (1) it cannot take responsibility for the submitted work, and (2) it cannot state whether it has conflicts of interest or whether it violated copyright or licensing agreements.
Accuracy… the Big Unknown
Researchers, publishers, and other interested parties have warned that AI does not always give accurate information. AI makes up citations, provides references to articles that do not exist, and is sometimes factually wrong. The problem is that the author (the human) may not know these inaccuracies are there. For example, if a person does not fact-check all of the information AI produces, they will not know whether the AI is generating research-based answers. Authors would also need to reference-check the articles cited by the AI to ensure that they exist. According to Sage, checking the accuracy, validity, and appropriateness of the manuscript is the human author’s responsibility. This includes ensuring the information from the AI is not plagiarized.
Is the Article Mine if I Use AI?
Returning to Sage, the publisher of popular special education journals such as the Journal of Special Education Technology, Exceptional Children, and Teacher Education and Special Education: its guidance states that, though AI can be used to support authors, it cannot replace them. Humans bring creativity and critical thinking that AI cannot. Sage warns authors to be mindful of the output from the AI, as it could be plagiarized, and the publisher expects any use of AI to be disclosed in the manuscript. There is also an expectation that authors will examine their manuscript for knowledge gaps, bias, and errors due to the use of AI. Undisclosed use of AI could lead to “corrective action”.
Join the Conversation
In our community, we are talking about the use of AI in our teacher and related-service provider preparation programs. Have you used it with students or in your own work? How do you account for the above? Join our community and the conversation!