Policy on the Use of Generative AI (GenAI)
Purpose:
This policy clarifies the journal’s position on the responsible use of Generative Artificial Intelligence (GenAI) tools (e.g., ChatGPT, Gemini, Copilot, Claude) in research, writing, reviewing, and editorial processes.
1. Acceptable Use
Authors may use GenAI tools only for the following purposes:
- Grammar and language editing
- Text formatting or reference organization
- Idea brainstorming or improving clarity
GenAI tools must not be used to generate original content, data, analyses, or results that are presented as the author’s own work.
2. Disclosure Requirement
If any GenAI tool was used in preparing the manuscript, authors must clearly disclose it in the “Acknowledgments” section or in a note before the References.
Example:
“Portions of this text were assisted by [tool name, version], used for language editing only.”
Failure to disclose may be considered a violation of publication ethics.
3. Authorship
GenAI tools may not be listed as authors or co-authors. Authorship requires human accountability, responsibility, and consent, none of which AI systems can provide.
4. Data Integrity and Responsibility
Authors bear full responsibility for the accuracy, originality, and ethical integrity of all content, including text or figures generated with GenAI support.
5. Peer Review and Editorial Use
Editors and reviewers may use GenAI tools only for language assistance or organizational tasks; they must not use them for decision-making, content generation, or the evaluation of manuscripts.
6. Ethical Alignment
This policy aligns with international publishing standards, including:
- COPE (Committee on Publication Ethics) guidelines
- Elsevier and Springer Nature GenAI editorial policies
- Scopus content ethics criteria
