Science, Elsevier, and Nature were quick to react, updating their respective editorial and publishing policies to state unconditionally that ChatGPT cannot be listed as an author on an academic paper. It is very hard to define exactly how GPT was used in a particular study, as some publishers demand, just as it is near impossible for authors to detail how they used Google as part of their research. Scholarly Kitchen
An app I have found useful every day is Perplexity. I am most taken with the auto-embedded citations of sources in the response, much like we do in research papers. This is most useful for digging deeper into topics. Inside Higher Ed
Tools such as Grammarly, Writefull, and even the Microsoft grammar checker are relied upon heavily by authors. If an author is using GPT for language purposes, why would that need to be declared while other tools do not? What if authors get their ideas for new research from ChatGPT, or have GPT analyze their results but write it up in their own words; might that be OK because the author is technically doing the writing? I believe that self-respecting researchers won't use GPT as a primary source, the same way they don't use Wikipedia in that manner. However, they can use it in a myriad of other ways, including brainstorming, sentence construction, data crunching, and more. The onus of responsibility for the veracity of information still falls on the researcher, but that doesn't mean we should rush to ban the tool because some might use it to cut corners. Scholarly Kitchen
An academic paper entitled Chatting and Cheating: Ensuring Academic Integrity in the Era of ChatGPT was published this month in an education journal, describing how artificial intelligence (AI) tools “raise a number of challenges and concerns, particularly in relation to academic honesty and plagiarism”. What readers – and indeed the peer reviewers who cleared it for publication – did not know was that the paper itself had been written by the controversial AI chatbot ChatGPT. The Guardian
An application that holds great potential for those of us in higher ed is ChatPDF! It is what you might imagine: a tool that allows you to load a PDF of up to 120 pages in length. You can then apply the now-familiar ChatGPT analysis approach to the document itself. Ask for a summary. Dig into specifics. This will be a useful tool for reviewing research and efficiently understanding complex rulings and other legal documents. Inside Higher Ed
If you’ve used ChatGPT or other AI tools in your research, (for APA) describe (in your academic paper) how you used the tool in your Method section or in a comparable section of your paper. For literature reviews or other types of essays or response or reaction papers, you might describe how you used the tool in your introduction. In your text, provide the prompt you used and then any portion of the relevant text that was generated in response. You may also put the full text of long responses from ChatGPT in an appendix of your paper or in online supplemental materials, so readers have access to the exact text that was generated. If you create appendices or supplemental materials, remember that each should be called out at least once in the body of your APA Style paper. APA Style
Outside of the most empirical subjects, the determinants of academic status will be uniquely human — networking and sheer charisma — making it a great time to reread Dale Carnegie’s How to Win Friends and Influence People. Chronicle of Higher Ed
The US journal Science announced an updated editorial policy, banning the use of text from ChatGPT and clarifying that the program could not be listed as an author. Leading scientific journals require authors to sign a form declaring that they are accountable for their contribution to the work. Since ChatGPT cannot do this, it cannot be an author. The Guardian
A chatbot was deemed capable of generating quality academic research ideas. This raises fundamental questions around the meaning of creativity and ownership of creative ideas, questions to which nobody yet has solid answers. Our suspicion here is that ChatGPT is particularly strong at taking a set of external texts and connecting them (the essence of a research idea), or taking easily identifiable sections from one document and adjusting them (an example is the data summary, an easily identifiable "text chunk" in most research studies). A relative weakness of the platform became apparent when the task was more complex, when there were too many stages to the conceptual process. The Conversation
Already some researchers are using the technology. Even within the small sample of my work colleagues, I've learned that it is being used for such daily tasks as: translating code from one programming language to another, potentially saving hours spent searching web forums for a solution; generating plain-language summaries of published research, or identifying key arguments on a particular topic; and creating bullet points to pull into a presentation or lecture. Chronicle of Higher Ed
For most professors, writing — even bad first drafts or outlines — requires our labor (and sometimes strain) to develop an original thought. If the goal is to write a paper that introduces boundary-breaking new ideas, AI tools might reduce some of the intellectual effort needed to make that happen. Some will see that as a smart use of time, not evidence of intellectual laziness. Chronicle of Higher Ed
The quality of scientific research will erode if academic publishers can't find ways to detect fake AI-generated images in papers. In the best-case scenario, this form of academic fraud will be limited to paper mill schemes that don't receive much attention anyway. In the worst-case scenario, it will impact even the most reputable journals, and scientists with good intentions will waste time and money chasing false ideas they believe to be true. The Register
Many journals’ new policies require that authors disclose use of text-generating tools and ban listing a large language model such as ChatGPT as a co-author, to underscore the human author’s responsibility for ensuring the text’s accuracy. That is the case for Nature and all Springer Nature journals, the JAMA Network, and groups that advise on best practices in publishing, such as the Committee on Publication Ethics and the World Association of Medical Editors. Science
Just as publishers begin to get a grip on manual image manipulation, another threat is emerging. Some researchers may be tempted to use generative AI models to create brand-new fake data rather than altering existing photos and scans. In fact, there is evidence to suggest that sham scientists may be doing this already. A spokesperson for Uncle Sam's defense research agency confirmed it has spotted fake medical images in published science papers that appear to be generated using AI. The Register
Also:
21 quotes about cheating with AI & plagiarism detection
13 quotes worth reading about Generative AI policies & bans
20 quotes worth reading about students using AI
27 quotes about AI & writing assignments
27 thoughts on teaching with AI
13 quotes worth reading about AI's impact on College Administrators & Faculty
17 articles about AI & Academic Scholarship