AI Policy for Authors
Generative AI technologies are changing the way we use and retrieve information and knowledge. At De Gruyter Brill, we are confident that artificial intelligence brings opportunities to further our mission of increasing the visibility, discoverability, and impact of academic research. At the same time, we are aware of the challenges and risks that come with the advances in generative AI technologies.
Generative AI is evolving quickly, and we will regularly review and update these guidelines to enable our authors to use AI tools in a secure and ethical manner.
Authorship
We do not accept papers (journal articles, book chapters, etc.) that are generated by Artificial Intelligence (AI) or machine learning tools, primarily because such tools cannot take responsibility for the submitted work and therefore cannot be considered authors. Authors remain fully accountable for any work submitted.
Disclosure
If Artificial Intelligence (AI) or machine learning tools or technologies are used as part of the design or methodology of a research study, their use should be clearly described in an acknowledgements section.
The use of AI tools must be disclosed when their output has significantly contributed to any part of your manuscript. The use of such tools for simple proofreading and copy-editing does not have to be declared.
Document which AI tools you use and how you use them throughout your research and writing process. Starting this process early will make it easier to keep track and add the right disclosures later on.
For the purposes of this policy, generative AI tools and large language models (LLMs) are systems that create or transform substantive content (e.g., text, images, code, audio, or data) in response to prompts. Examples include chatbots such as ChatGPT, Copilot, and Gemini, image generators such as Midjourney or DALL-E, and code assistants such as Claude Code or Cursor.
AI-Generated Images
We do not permit the use of generative AI tools to create or in any way manipulate images, figures, or research data in submitted manuscripts. The creation or alteration of experimental images using AI is considered unethical and is strictly prohibited.
Rights, Privacy and Accuracy
By default, you should assume that everything you enter into an AI system can be stored, processed, and used for training. Individual tools differ, and some allow you to minimize data processing through their settings, but in general, make sure you do not input personal, confidential, or sensitive data.
If possible, choose tools where you can:
- Disable the storage of data for training
- Turn off the memory or tracking history
- Delete your chat history
Copyright
Do not input copyrighted material into any AI system. Copyrighted texts may only be fed into an AI tool if the rights owner has granted you the right to do so, and/or if you can ensure that the material is not saved or used for further AI training.
Always read the terms and conditions of the tools you choose to use, and do not use any dubious tools.
Privacy & Confidentiality
Always keep applicable data privacy legislation in mind when interacting with any AI tool and inputting any information. If you are in Europe, for instance, refer to the EU AI Act and the GDPR compliance guidelines.
Accuracy
AI can make mistakes. Proofread generated output thoroughly for inaccuracies or biases, and always fact-check results. As an author, you are solely and fully responsible for the work you submit to us, including any parts produced by an AI tool, and are thus liable for any breach of publication ethics.
Peer Review
The manuscript, or any part thereof, should not be entered into AI systems such as ChatGPT, Grammarly, etc. It is impossible to verify how these platforms handle data; any uploads may therefore compromise the authors' confidentiality, proprietary rights, or data privacy, which does not comply with our publishing standards.
Peer review requires critical thinking and nuanced assessment, tasks that fall beyond the capabilities of generative AI and AI-assisted technologies, which are prone to generating incorrect, incomplete, or biased conclusions. The responsibility for peer review therefore lies exclusively with humans.