Focus Track toward a Global Reporting Standard for AI Disclosure in Research

While Artificial Intelligence (AI) can play a valuable role in research, its use must remain transparent, traceable, and responsibly integrated within scholarly practices. Transparency about the use of AI in research articles and other scholarly outputs is an important aspect of research integrity. At present, policies and practices for how to disclose AI use vary widely across disciplines, regions, and publication cultures. Many publishers have begun introducing their own disclosure requirements. These developments underscore the need for a shared, global understanding of how AI contributions should be disclosed in research.

A focus track towards a standard

To address this need, the focus track of the World Conference on Research Integrity in Vancouver, 3–6 May 2026, will work toward a Global Reporting Standard for AI Disclosure in Research. This focus track aims to develop a broadly supported reporting standard that can be used across research disciplines, publication cultures, and organisational contexts. Similar to established reporting tools such as the CRediT taxonomy, such a standard would help align expectations across the research ecosystem, make disclosure practices more consistent and comparable, and facilitate implementation by publishers, institutions, and researchers. By harmonising AI disclosure, we hope to support transparency, reduce uncertainty among authors, and ultimately strengthen research integrity.

A participatory process

To ensure that the development of this reporting standard reflects a wide range of perspectives, several of the leading organisations in science and publishing support this initiative: the International Science Council (ISC), the Committee on Publication Ethics (COPE), the Association of Scientific, Technical and Medical Publishers (STM) and the Global Young Academy (GYA), alongside the World Conference on Research Integrity Foundation (WCRIF) through this Vancouver Focus Track. The ISC, WCRIF, COPE, STM and GYA invite their members, networks and the broader research community to contribute through three consultation rounds:

  1. December 2025 – February 2026: Mapping the needs for AI disclosure, yielding a preferred structured format.
  2. April 2026 – August 2026: Identifying what should be disclosed, yielding content and taxonomies.
  3. End of 2026: Refining the Vancouver Standard based on concrete feedback.

This participatory approach brings together perspectives from editors, academy members, the leadership of research institutions, funders, libraries, ethics bodies, publishers and experts in research integrity.

How to participate

We invite all relevant stakeholders with an interest in research integrity to share their input through the webform available at the bottom of https://council.science/AIdisclosure.

Before filling in the webform, we encourage you:

  • to review the questions and preparatory materials (5 pages).
  • to discuss within your organisation or professional community and submit a summary of your collective reflections. (If that is not possible, we also accept individual responses.)

For questions, contact Focus Track Process Lead Bert Seghers at office[at]enrio.eu.

Your contribution is essential to building a shared, global understanding of AI disclosure in research, and to ensuring that the resulting standard is both meaningful and workable across the diversity of the global research ecosystem.

We warmly thank you in advance for your contribution.