Reporting guidance for the use of artificial intelligence in environmental evidence synthesis
CEE thanks the following authors for providing this guidance – Biljana Macura1, Geoff K Frampton2, Gillian Petrokofsky3, Andrew S Pullin4, Barbara Livoreil5, Shinichi Nakagawa6, Matthew Grainger7
1 Stockholm Environment Institute, HQ, Stockholm, Sweden, and CEE Sweden
2 Southampton Health Technology Assessments Centre (SHTAC), University of Southampton, UK
3 Department of Biology, University of Oxford, UK
4 Collaboration for Environmental Evidence, UK
5 Freelance consultant, Coopaname, Paris, France
6 Department of Biological Sciences, University of Alberta, Canada
7 Knowledge Synthesis Department, Norwegian Institute for Nature Research, Trondheim, Norway
With the rise of artificial intelligence (AI), numerous systems and tools are now available to support evidence synthesis across disciplines (Berger-Tal et al. 2024; Ge et al. 2024). However, the integration of AI into evidence synthesis must be undertaken transparently, reproducibly, and ethically, with appropriate human oversight and rigorous validation of AI outputs (Thomas et al. 2025). In this context, AI use in evidence synthesis refers to the application of any predictive or generative AI system, tool, or process at any stage of the review, including planning, searching, deduplication, screening, critical appraisal, data extraction, synthesis, and reporting.
The following guidance provides recommendations for the transparent reporting of AI use in environmental evidence synthesis. It aligns with the Position statement on artificial intelligence (AI) use in evidence synthesis (Flemyng et al. 2025) and the principles of RAISE (Responsible AI in Evidence Synthesis; Thomas et al. 2025) – an international initiative to standardise recommendations for responsible AI use in evidence synthesis. This is a living document that will be regularly updated to reflect ongoing technological development.
For each AI system, tool, or technology (hereafter referred to as an AI system) that is planned or used in an evidence synthesis (including protocols and reports of systematic reviews, systematic maps, and rapid reviews), authors should report the following:
- Justification and description of the AI system
  - Provide the name, version, and developer of the AI system(s).
  - Justify the use of the AI system(s) and reference published validation studies where available.
  - Describe where, how, and why AI is used in the overall evidence synthesis workflow, specifying the automated or semi-automated tasks involved (e.g., search strategy planning, screening, data extraction, critical appraisal).
  - Report any modifications or custom parameters applied to the AI system.
  - For evidence synthesis reports, describe any deviations from the protocol in the use of AI and the reasons for them.
- Validation of the AI system
  - Describe any validation processes conducted to evaluate the performance of the system and the accuracy and reliability of AI-generated outputs. Document the extent of agreement between AI-generated results and human reviewers. Where no validation was conducted, explain why.
- Limitations and ethical considerations
  - Disclose any known limitations of the AI system's performance and the steps taken to mitigate them. Report how errors or misclassifications were identified and addressed.
  - Discuss any ethical considerations, such as privacy concerns or potential impacts on the integrity of the evidence synthesis process.
- Additional files
  - Include validation studies conducted to evaluate AI performance.
  - Provide all code and scripts used to generate or analyse results.
  - For AI tools that rely on prompts, supply the exact prompts used, along with a brief rationale for the wording and structure of the final prompt.
- Funding and conflicts of interest
  - Disclose any funding or financial interests related to the AI systems used.
  - Report affiliations or potential conflicts of interest with AI developers or providers.
  - Ensure that ethical, legal, and regulatory standards are adhered to when applying AI to your synthesis. For example, be aware of issues relating to plagiarism, provenance, copyright, intellectual property, jurisdiction, licensing, and confidentiality, as well as compliance and privacy responsibilities, including data protection laws.
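As one illustration of the validation reporting described above, agreement between AI-generated screening decisions and human reviewers is often summarised with percent agreement and Cohen's kappa. The sketch below shows how these could be computed; the decision lists are hypothetical examples, not from any real synthesis, and the function is a minimal plain-Python implementation rather than part of any specific AI tool.

```python
# Minimal sketch: quantify agreement between AI and human screening
# decisions using percent agreement and Cohen's kappa.
# The decision data below are hypothetical.

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters over the same set of records."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(x == y for x, y in zip(rater_a, rater_b)) / n
    labels = set(rater_a) | set(rater_b)
    # Chance agreement, estimated from each rater's marginal proportions
    expected = sum(
        (rater_a.count(label) / n) * (rater_b.count(label) / n)
        for label in labels
    )
    return (observed - expected) / (1 - expected)

ai_decisions    = ["include", "exclude", "exclude", "include", "exclude", "exclude"]
human_decisions = ["include", "exclude", "include", "include", "exclude", "exclude"]

n = len(ai_decisions)
percent_agreement = sum(x == y for x, y in zip(ai_decisions, human_decisions)) / n
kappa = cohens_kappa(ai_decisions, human_decisions)
print(f"Percent agreement: {percent_agreement:.2f}")  # 0.83
print(f"Cohen's kappa: {kappa:.2f}")                  # 0.67
```

Reporting kappa alongside raw percent agreement is useful because kappa corrects for the agreement expected by chance, which matters when one decision (e.g., "exclude") dominates the screening set.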
Authors may consider using this template in the Methods section; the text can be adapted for either a protocol or a report:
“We used [AI tool/system/approach name, version] developed by [organization/developer] for [specific purpose(s) in the evidence synthesis process]. The [AI tool/system/approach] was [briefly describe any customization, training, or parameters applied]. Outputs from the [AI tool/system/approach] were [describe validation process]. Limitations of the [AI tool/system/approach] include [describe known limitations, potential biases, and ethical concerns]. A detailed description of the methodology, including parameters and validation procedures as well as [prompts, code snippets,…] is available in [supplementary materials and/or protocol].”
References
Berger-Tal, Oded, Bob B. M. Wong, Carrie Ann Adams, et al. 2024. “Leveraging AI to Improve Evidence Synthesis in Conservation.” Trends in Ecology & Evolution 39 (6): 548–57. https://doi.org/10.1016/j.tree.2024.04.007.
Flemyng, Ella, Anna Noel-Storr, Biljana Macura, et al. 2025. “Position Statement on Artificial Intelligence (AI) Use in Evidence Synthesis across Cochrane, the Campbell Collaboration, JBI and the Collaboration for Environmental Evidence 2025.” Environmental Evidence 14 (1): 20. https://doi.org/10.1186/s13750-025-00374-5.
Ge, Lixia, Rupesh Agrawal, Maxwell Singer, et al. 2024. “Leveraging Artificial Intelligence to Enhance Systematic Reviews in Health Research: Advanced Tools and Challenges.” Systematic Reviews 13 (1): 269. https://doi.org/10.1186/s13643-024-02682-2.
Thomas, James, Ella Flemyng, Anna Noel-Storr, et al. 2025. Responsible AI in Evidence Synthesis (RAISE): Guidance and Recommendations. https://doi.org/10.17605/OSF.IO/FWAUD.
