Citation, Credit & Creativity: Using AI Ethically in Scholarship
- Başak Pırıl Gökayaz

- Sep 10
- 5 min read
Staring at a blank page or racing against a deadline, it’s easy to see why generative AI feels like a lifeline. It can sketch an outline, smooth clunky sentences, or reframe an argument in seconds—yet that same speed raises real questions about originality, authorship, and accountability. If a tool helped shape a paragraph, whose words are they? And will readers trust the work if they can’t see where human judgment ends?
This guide offers practical answers: when to disclose AI use (and how to do it cleanly), how to cite it across APA/MLA/Chicago, where AI is off-limits (confidential or sensitive contexts), and how to keep your voice and ideas at the center. The aim is simple—use AI as an assistant, not a co-author—so your scholarship stays clear, credible, and genuinely yours.
Why Ethics and Transparency Matter When Using AI

AI can genuinely help you think, draft, and revise. But scholarship runs on trust, and trust depends on knowing what (and who) shaped a piece of writing. Ethics and transparency aren’t bureaucratic hurdles—they’re how you show your working so readers can evaluate your claims.
Plagiarism & misattribution: LLMs can echo copyrighted lines or muddle quotes. Always double-check facts and cite the original sources—not the model—when specific ideas or wording show up.
Authorship clarity: AI can’t take responsibility, give consent, or be accountable, so it shouldn’t be listed as an author. If you used a tool, say so—but credit only people as authors.
Peer-review integrity: Drafts, manuscripts, and grant text are confidential. Don’t drop them into public AI tools. (Agencies like the NIH explicitly forbid generative AI in peer review.)
Privacy & data security: Avoid pasting sensitive, proprietary, or personal data into chatbots. Use secure, institution-approved platforms instead.
Copyright & ownership: U.S. copyright protects human authorship. Purely AI-generated material isn’t copyrightable, but your creative choices—prompting with intent, selecting, editing, and arranging—can be. Keep a brief record of your role.
In practice, the rule of thumb is simple: use AI as an assistant, not a co-author. Disclose what you did, verify what it produced, cite the sources it surfaced, protect confidential material, and keep notes on your creative decisions. That’s how you preserve both clarity and credibility.
Citing AI: When and How
Citations aren’t about crediting a robot; they’re about showing readers how the work came together. A brief, clear note makes your process transparent and reproducible, and it helps others judge what was generated, what was edited, and what was uniquely yours. Whatever style you follow, the basics stay the same: name the tool, include the model/version and date, and—when relevant—mention the prompts or context.
When to acknowledge AI
If a model influenced your writing—whether you quoted a line, paraphrased an idea, or used it to spark an argument—say so. Styles differ on format, but the aim is simple: be clear about the tool’s role.
How to format it:
APA (7th): Cite the system/developer (e.g., OpenAI), include the model/version and date, and—if you quote the output—describe prompts in Methods or an appendix.
MLA: Treat your prompt as the “title,” list ChatGPT as the container with version and OpenAI as publisher; add the date and URL.
Chicago: Use an acknowledgment or footnote (often like personal communication). Add a bibliography entry only if the exact output is publicly accessible.
Add a brief methods note: Whatever the style, include a short note on how you used AI (e.g., “drafted the opening paragraph,” “translated interview excerpts,” “suggested an outline”). This helps readers judge reliability and reproduce your process.
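To make these formats concrete, here is roughly what each style's entry could look like. The model name, version, date, prompt, and URL below are placeholders; substitute the details of the tool you actually used and confirm against the current style guidance:

```text
APA (7th):
OpenAI. (2025). ChatGPT (GPT-4 version) [Large language model].
https://chat.openai.com/

MLA:
"Summarize the ethical risks of AI-assisted peer review" prompt. ChatGPT,
GPT-4 version, OpenAI, 15 Mar. 2025, chat.openai.com.

Chicago (footnote):
1. Text generated by ChatGPT, OpenAI, March 15, 2025,
https://chat.openai.com/.

Methods note:
ChatGPT (GPT-4, March 2025) was used to draft an initial outline and to
smooth phrasing in the introduction. All claims, citations, and final
wording were written and verified by the author.
```

Notice that in every style the human-readable facts are the same: which tool, which version, when, and what it did. The style only changes where those facts go.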
Credit, Authorship, and Accountability
Scholarship runs on responsibility. AI can help you draft or refine, but only people can make choices, stand behind claims, and answer for mistakes. Clear credit lines tell readers who did the thinking, who checked the facts, and who is accountable for the final text.
No AI authors: Bodies like ICMJE, COPE, and WAME agree that authorship requires substantial contribution, final approval, and accountability—standards only humans can meet. Acknowledge the tools you used, but list only people as authors.

Human responsibilities:
Verify every claim and quotation.
Contribute original analysis and argument beyond any model’s output.
Attribute underlying sources when the model surfaces identifiable ideas.
Be transparent about how AI assisted your process.
Editors & reviewers: Journals should set clear AI policies, protect confidentiality, and—where appropriate—use screening tools to spot AI-generated submissions.
Creativity, Originality, and Copyright
AI can jump-start creativity—helping you brainstorm, rephrase, and polish language—but it can’t do the actual interpreting or meaning-making. Keep your ideas, structure, and argument firmly in your hands, and jot down the key editorial choices you make along the way. From a copyright perspective, it’s that human creative control—intentional prompting, selecting and arranging material, and substantive revision—that makes the final work protectable.
Privacy and Confidentiality
Assume public chatbots are not secure for anything sensitive. Stick to your university’s vetted, protected tools, follow local policies, and avoid uploading unpublished manuscripts, grant text, participant information, or other confidential datasets.
A Practical Checklist You Can Copy Into Your Paper
Disclose: Say which tool you used, the model/version, the date, and what it did. Put fuller details in a note or your Methods section.
Cite: Follow the required style (APA/MLA/Chicago). If you’re unsure, add a brief footnote acknowledging AI assistance.
Verify: Fact-check anything that came from a model and, where possible, confirm against or replace with primary sources.
Keep it human: Make sure the argument, interpretation, and voice are yours.
Protect data: Don’t paste sensitive material into public models; stick to approved, secure workflows.
Respect peer review: Never run confidential manuscripts or grant text through generative AI.
Stay current: Policies change—check your journal and institution before you submit.
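Pulled together, the checklist above might yield a short disclosure statement like the following. The tool, version, and date are placeholders to adapt to your own use:

```text
AI Use Disclosure: The author used ChatGPT (GPT-4, OpenAI, March 2025) to
suggest an outline and to rephrase several sentences for clarity. All
content was reviewed, fact-checked, and revised by the author, who takes
full responsibility for the final text. No confidential or unpublished
material was entered into the tool.
```

A statement this brief covers all four essentials at once: what tool, what it did, who verified the output, and what was kept out of it.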
Used thoughtfully, AI can be a handy collaborator—it can sketch drafts, translate tough passages, and surface new angles—but you remain the author.
Keep the path simple: be transparent about its role, credit people (not tools), cite carefully, protect confidential material, and keep the creative decisions in your hands.
Sources
American Psychological Association. “How to Cite ChatGPT.” APA Style, apastyle.apa.org/blog/how-to-cite-chatgpt.
The Chicago Manual of Style. “How Do You Recommend Citing Content Developed or Generated by Artificial Intelligence, Such as ChatGPT?” The Chicago Manual of Style Online, 7 Mar. 2023, www.chicagomanualofstyle.org/qanda/data/faq/topics/Documentation/faq0422.html.
Committee on Publication Ethics. “Authorship and AI Tools.” COPE, 13 Feb. 2023, publicationethics.org/guidance/cope-position/authorship-and-ai-tools.
Council of Science Editors. “2.2 Authorship and Authorship Responsibilities.” Council of Science Editors, www.councilscienceeditors.org/2-2-authorship-and-authorship-responsibilities.
—. “Recommendations for Promoting Integrity in Scientific Journal Publications.” Council of Science Editors, www.councilscienceeditors.org/recommendations-for-promoting-integrity-in-scientific-journal-publications.
Harvard University Information Technology. “AI Sandbox.” Harvard University Information Technology, www.huit.harvard.edu/ai-sandbox.
—. “Generative Artificial Intelligence (AI) Guidelines.” Harvard University Information Technology, www.huit.harvard.edu/ai/guidelines.
—. “Initial Guidelines for Using ChatGPT and Other Generative AI Tools.” Harvard University Information Technology, 13 July 2023, www.huit.harvard.edu/news/ai-guidelines.
International Committee of Medical Journal Editors. “Defining the Role of Authors and Contributors.” ICMJE Recommendations, www.icmje.org/recommendations/browse/roles-and-responsibilities/defining-the-role-of-authors-and-contributors.html.
Modern Language Association of America. “Beyond Citation: Describing AI Use in Your Work.” MLA Style Center, 13 Aug. 2025, style.mla.org/describing-ai-use.
—. “How Do I Cite Generative AI in MLA Style? (Updated and Revised).” MLA Style Center, 13 Aug. 2025, style.mla.org/citing-generative-ai-updated-revised.
University of Oxford. “FAQs for the Policy on Generative AI in Research.” University of Oxford, www.ox.ac.uk/research/support-researchers/research-practice/faqs-policy-generative-ai-research.
—. “New Ethical Framework to Help Navigate Use of AI in Academic Research.” University of Oxford News, 13 Nov. 2024, www.ox.ac.uk/news/2024-11-13-new-ethical-framework-help-navigate-use-ai-academic-research.
—. “Use of Generative AI Tools to Support Learning.” University of Oxford, www.ox.ac.uk/students/academic/guidance/skills/ai-study.
U.S. Copyright Office. “Copyright and Artificial Intelligence.” U.S. Copyright Office, 2025, www.copyright.gov/ai.
—. Copyright and Artificial Intelligence, Part 2: Copyrightability. 29 Jan. 2025, www.copyright.gov/ai/Copyright-and-Artificial-Intelligence-Part-2-Copyrightability-Report.pdf.
—. “Copyright Registration Guidance: Works Containing Material Generated by Artificial Intelligence.” U.S. Copyright Office, 16 Mar. 2023, www.copyright.gov/ai/ai_policy_guidance.pdf.
World Association of Medical Editors. “Chatbots, Generative AI, and Scholarly Manuscripts: WAME Recommendations on Chatbots and Generative Artificial Intelligence in Relation to Scholarly Publications.” WAME, 31 May 2023, wame.org/page3.php?id=106.
