|SUMMARY OF CEE STANDARDS OF CONDUCT AND REPORTING FOR SYSTEMATIC REVIEWS AND SYSTEMATIC MAPS OF EVIDENCE|
|Relevant review type (SR/SM)||C (conduct) / R (reporting)||Review Document. Review Stage||Minimum Standard||Rationale/explanation|
|Planning||SR and SM|
|1.1||R||Report. Methods||The review should cite a separate a-priori protocol containing details of the conduct of all review and synthesis stages (e.g. question, search, screening, critical appraisal, data extraction and synthesis). The protocol should have been publicly accessible prior to the conduct of the review.||A protocol is a document describing the methods to be used, produced prior to the commencement of an evidence synthesis. It describes the background to the synthesis, the questions, the strategy that will be used to search for primary research articles, and the criteria for deciding whether or not an article is then relevant to include in the synthesis. The protocol should also outline the approach to assessing the quality of each included study, and to extracting and synthesising data from primary research studies. Writing a protocol is therefore analogous to developing and documenting a methodology prior to conducting fieldwork or experiments, and is similarly integral to producing a study that is robust against post hoc changes in methods and scope.|
|1.2||R||Protocol. Objectives and question elements||The review question or hypothesis should be clearly stated, and the key elements of the question clearly defined in terms of PICO, PECO, PO, PIT, etc. frameworks.||A well-defined question (or hypothesis) is crucial for assessing the reliability of subsequent decisions on searching and screening for eligible studies, as well as forming the basis for critical appraisal of study conduct and for data extraction and synthesis.|
|1.3||R||Protocol. Stakeholder engagement||The protocol background should explain the demand for the question and, if relevant, how the question was developed through stakeholder consultation.||The relationship between the review question and evidence needs of stakeholders should be clear.|
|Article Searching||SR and SM|
|2.1||C/R||Protocol. Search sources||
Sources of articles used for searching should be stated and should capture both commercially published scientific literature and grey literature (may or may not be peer-reviewed).
Searches should use a combination of databases, search engines and specialist websites (and may also be informed by stakeholders), or any limitations should be fully justified.
NB. Statements such as ‘We considered only peer-reviewed material because this is more reliable than grey literature’ without evidence that the methodological quality of potentially relevant grey literature was assessed do not indicate that grey literature was objectively considered.
|The resources used to find relevant literature influence the comprehensiveness and reliability of the synthesis. The principal sources for locating peer-reviewed articles are electronic databases of scientific literature and academic search engines, with a range of supplementary methods. No single database indexes all peer-reviewed articles. Moreover, these sources are unlikely to capture potentially relevant grey literature (e.g., reports by governmental and non-governmental organisations, unpublished studies) and consequently can be complemented by searching thesis repositories, websites of relevant organisations and conducting internet searches. Other supplementary search strategies include citation chasing (backwards and forwards), and contact with experts in the field.|
|2.2||C||Protocol. Search scope||Searches should include appropriate national, regional, and subject specific bibliographic databases and organisational websites.||The resources used to find relevant literature influence the comprehensiveness and reliability of the synthesis. To minimise bias the choice of sources should be both comprehensive and appropriate for the scope of the question.|
|2.3||R||Protocol. Search strategy||Details of the planned search strategy to be used, including: database names accessed, institutional subscriptions (or date ranges subscribed for each database), search options (e.g. ‘topic words’ or ‘full text’ search facility), efforts to source grey literature, other sources of evidence (e.g. hand searching, calls for evidence/submission of evidence by stakeholders).||An optimal search for literature should aim to maximise comprehensiveness (aiming to identify all relevant studies) and transparency (readers should be able to replicate and evaluate the search). This is to avoid ‘cherry-picking’ studies or assembling a biased or unrepresentative body of evidence. Where possible, advice should be sought from an expert such as an information specialist/scientist.|
|2.4||R||Protocol. Search strategy limitations||Use of any restrictions in the search strategy on publication date, publication format, or language should be stated and justified.||Limitations imposed on searching such as publication date, publication format, or language can increase potential for bias and so any limitations should be considered and justified in terms of this risk.|
|2.5||C||Protocol. Search comprehensiveness||A comprehensiveness test of the search should be conducted.||Demonstrating that the search strategy is comprehensive and therefore reduces potential for bias is a fundamental requirement of systematic reviews. A comprehensiveness test is typically done by measuring the capture of a test list of known eligible articles that has been compiled independently of the search (e.g. from existing reviews or stakeholder recommendations).|
|2.6||R||Protocol. Search comprehensiveness||Describe the process by which the comprehensiveness of the search strategy was assessed (i.e. the list of benchmark articles), and the outcomes of the tests.||As above|
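The comprehensiveness test described above reduces to a recall check: what fraction of an independently compiled benchmark list the search captures. A minimal sketch, assuming articles are identified by DOI (the DOIs below are hypothetical, for illustration only):

```python
def search_recall(benchmark, retrieved):
    """Proportion of benchmark (test-list) articles captured by the search,
    plus the list of benchmark articles the search missed."""
    benchmark = set(benchmark)
    found = benchmark & set(retrieved)
    missed = sorted(benchmark - found)
    return len(found) / len(benchmark), missed

# Hypothetical DOIs: a pre-compiled test list vs. the search results.
benchmark = ["10.1000/a", "10.1000/b", "10.1000/c", "10.1000/d"]
retrieved = ["10.1000/a", "10.1000/c", "10.1000/x", "10.1000/d"]

recall, missed = search_recall(benchmark, retrieved)
# Missed benchmark articles should prompt revision of the search strings.
```

Both the recall value and the missed articles would be reported as the outcome of the test, with the search strategy revised until capture is acceptable.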
|2.7||R||Report. Search conduct||All search terms and/or strings, Boolean operators (‘AND’, ‘OR’ etc.) and wildcards should be clearly provided (in text or additional files) so that the exact search is replicable by a third party.||Replicability of the search is an essential component of transparency of the review and for readers to be confident that the search is comprehensive.|
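As an illustration of how a replicable string is typically assembled, synonyms within each question element are joined with OR and the elements are joined with AND, with `*` as a truncation wildcard; the terms below are hypothetical, not drawn from the standards:

```python
# Hypothetical question elements (PICO-style synonym lists).
population   = ["grassland*", "meadow*"]
intervention = ["mow*", "grazing"]
outcome      = ["richness", "diversity", "abundance"]

def or_block(terms):
    """Join synonyms for one question element with OR; quote phrases."""
    return "(" + " OR ".join(f'"{t}"' if " " in t else t for t in terms) + ")"

# Elements combined with AND; this full string is what should be reported.
search_string = " AND ".join(or_block(t) for t in (population, intervention, outcome))
print(search_string)
# (grassland* OR meadow*) AND (mow* OR grazing) AND (richness OR diversity OR abundance)
```

Reporting the assembled string verbatim, per database, is what makes the search replicable by a third party.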
|2.8||R||Report. Search conduct||Comprehensive information should be given about the databases and websites searched and search engines used (including any search options or settings chosen), together with dates of searches. A clear account provided of grey literature and supplementary searches conducted.||As above|
|2.9||R||Report. Search updates||Any update to searches undertaken during the conduct of the review should be reported and justified.||Updates can be important for the validity of the review when its conduct has taken significant time. Updates should be consistent with the original search.|
|2.10||R||Report. Search limitations||Limitations due to, for example, language or publication date should be considered in terms of risk of bias.||The ‘Review limitations’ section should consider the implications of limitations imposed on the search.|
|Article Screening||SR and SM|
|3.1||R||Protocol. Screening – eligibility criteria||Eligibility criteria should be precisely defined (e.g. reliance on broad and potentially ambiguous terms should be avoided) and all key elements of the question included. Other criteria, such as study design and geographical limits should be defined and justified.||Clearly stated criteria for eligibility decisions minimise the potential for subjective decisions to influence which studies are included in the review, increase the transparency of the synthesis, and allow readers to assess the validity of the criteria to the review question. In addition to following the review question, eligibility criteria may define limits on the type of primary research to be considered in terms of (for example): geographic scope, type of data reported, type of intervention or impact, study design, date.|
|3.2||R||Report. Screening – eligibility criteria||Eligibility criteria should be consistent between a-priori Protocol and Review or differences fully explained.||Any post-hoc change in the eligibility criteria could increase potential for bias and should be fully reported and justified in those terms.|
|3.3||C||Report. Screening strategy||Eligibility criteria should be independently applied by more than one reviewer, ideally to all articles screened at title, abstract and full text stages. Pragmatic decisions about dual screening of subsamples only may be acceptable when large numbers of articles are screened. In such a case the rationale and methods of subsampling should be fully described and justified.||More than one person should screen studies for inclusion to reduce the risk of human error and to ensure that the criteria are applied consistently to the articles returned by the search. If more than one person independently evaluates the relevance of the same articles, the consistency of inclusion/exclusion decisions can be assessed. Piloting the criteria, and discussing and refining the eligibility decisions, can also ensure they are consistently applied.|
|3.4||C/R||Report. Screening consistency||Consistency of screening decisions should be measured and reported and all disagreements between reviewers discussed and resolved so as to inform subsequent decisions.||As above|
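Screening consistency on a dual-screened set of records is commonly measured with Cohen's kappa, which corrects raw agreement for the agreement expected by chance. A minimal sketch; the include/exclude decisions are invented for illustration:

```python
def cohens_kappa(r1, r2):
    """Cohen's kappa for two reviewers' include/exclude decisions."""
    assert len(r1) == len(r2)
    n = len(r1)
    observed = sum(a == b for a, b in zip(r1, r2)) / n
    labels = set(r1) | set(r2)
    # Chance agreement from each reviewer's marginal decision rates.
    expected = sum((r1.count(l) / n) * (r2.count(l) / n) for l in labels)
    return (observed - expected) / (1 - expected)

# Hypothetical include (1) / exclude (0) decisions on ten dual-screened titles.
reviewer_a = [1, 1, 0, 0, 1, 0, 0, 1, 0, 0]
reviewer_b = [1, 0, 0, 0, 1, 0, 0, 1, 0, 1]
kappa = cohens_kappa(reviewer_a, reviewer_b)  # ≈ 0.58 for these decisions
```

Values around 0.6 or above are often taken as substantial agreement; lower values should trigger discussion, resolution of disagreements, and refinement of the criteria before screening continues.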
|3.5||R||Report. Screening outcomes||The number of unique articles found during the search (after removal of duplicates) should be reported and the number excluded at each stage of the screening process fully presented (e.g. in a flow diagram or table).||Listing all articles that were screened for eligibility and indicating whether each was included or excluded in data synthesis (usually as supplementary material), makes it clear whether potentially relevant studies have been omitted according to the eligibility criteria or were not captured by the search.|
|3.6||R||Report. Screening decisions||The reasons for exclusion of each individual article/study considered at full text should be reported (e.g. in additional files).||Documenting the reasons for article exclusion at full text is essential for transparency, allowing readers to judge the consistency and objectivity of decisions.|
|3.7||R||Review. Screening decisions||A list should be provided of any articles which had unclear eligibility status after completion of full-text screening (with explanation why they could not be classified) and of any articles that could not be obtained for full-text screening.|
|3.8||R||Review. Screening outcomes||The final list of eligible studies should be provided separately from the full report reference list.||As above|
|Data Coding and Extraction||Data coding standards relate to both SR and SM. Data extraction (extraction of primary study results) relates to SR only|
|4.1||R||Protocol. Data coding and extraction strategy||Methods by which meta-data and raw data from each study are to be coded and extracted should be stated in the Protocol so that the process can be replicated and confirmed in the final report, or deviations are reported and justified.||Transparently identifying a consistent set of data to extract from each study, for example into a structured data extraction sheet, allows the process to be replicated and evaluated by a third party, and reduces the potential for bias over which data are extracted from individual studies. Typically, extracted information from each study included in the review comprises at least: study aims; intervention details; study design; population characteristics; comparator details; and results (point estimates and measures of variance).|
|4.2||R||Report. Data coding and extraction – records||All data coded or selected for extraction should be provided in a table or spreadsheet as set out in the a-priori Protocol (this includes data used in the synthesis for each study, e.g. outcome metrics or effect size, and meta-data).||Providing a summary table or spreadsheet in which the metadata on population, intervention/exposure and study design, and data on outcome for each study are stated makes data extraction transparent, and makes it easier for readers to locate the most relevant primary literature and conduct supplementary analyses if required.|
|4.3||C||Report. Data coding and extraction – consistency||Data coded or extracted from each study should be cross checked by at least two independent reviewers. If not, an explanation should be provided of how a sample of coded or extracted data was cross checked between two or more reviewers.||Checking data extraction improves accuracy by ensuring the correct data are extracted for each element and reduces the risk of errors due to interpretation or transcription|
|4.4||R||Report. Data Coding and Extraction||Any process for obtaining and confirming missing or unclear information or data from authors should be described.|
|Study validity||Compulsory for SR. Optional for SM|
|5.1||C||Protocol. Critical appraisal of study validity||An effort should be made to identify all relevant sources of bias (threats to internal and external validity)||
Documented critical appraisal, as applied to each individual included study, using relevant, pre-defined critical appraisal criteria allows the author(s) of the synthesis and the reader to make more objective assessments of the relative reliability (or weighting) of each study.
Some potentially relevant studies may not meet baseline methodological requirements (e.g. small sample size, pseudoreplication, spatial autocorrelation, lack of appropriate controls etc.) and so may be excluded from the synthesis. Effectively, these studies are weighted as ‘zero’.
Studies included in the synthesis may be treated differently according to the rigour of the sampling design, according to differences in sampling effectiveness (e.g. sample size, sampling area, study duration, etc.), or according to their generalisability for the synthesis in hand (e.g. spatial scale, study setting, etc.). Where possible, available risk-of-bias tools should be used and adapted rather than developing ad hoc criteria.|
|5.2||C||Report. Critical appraisal of study validity||Each relevant type of bias (threat to internal and external validity) should be assessed individually for all included studies||As above|
|5.3||R||Report. Critical appraisal of study validity||Results should be reported using a critical appraisal sheet constructed and tested at the protocol stage.||Use of explicit appraisal criteria that are developed in advance of the process avoids post hoc decisions being made about study validity and changes to appraisal criteria during the assessment process.|
|5.4||R||Report. Critical appraisal of study validity||Critical appraisal criteria should be consistent between a-priori Protocol and review or differences fully explained.||Any post hoc changes in critical appraisal criteria could increase risk of bias and should be fully reported and justified in this context.|
|5.5||C/R||Report. Critical appraisal of study validity – consistency||At least two people should independently critically appraise each study, with disagreements and the process of resolution reported.||Employing more than one reviewer reduces the risk of bias in critical appraisal decisions or errors in judgement.|
|5.6||R||Report. Critical appraisal of study validity – implementation||A description should be provided of how the information from critical appraisal was used in synthesis.|
|Data Synthesis||SR only|
|6.1||R||Protocol. Data synthesis – strategy||The choice of synthesis method (e.g. narrative synthesis only or with meta-analysis) should be justified in the Protocol on the basis of scoping characteristics of included studies, taking into consideration variability between studies in sample size, study design, context, etc.||If appropriate, data should be pooled in a quantitative synthesis (e.g. meta-analysis, meta-regression). If substantial differences between populations, interventions, comparators or outcomes exist, meta-analysis (i.e. combining effect sizes across different studies) may not be appropriate. Since meta-analysis effectively treats all individual studies as part of one large study, it is only appropriate when calculating an average effect is meaningful.|
|6.2||R||Protocol. Data synthesis – limitations||Where meta-analysis is not to be conducted, a reason for this should be given.||If it is not appropriate to pool data across studies in meta-analysis, a reason should be given and a structured approach to some other quantitative or narrative synthesis taken. Efforts should be made to make sense of the data set as a whole, beyond describing results from individual studies in turn, noting differences in the weight of evidence behind statements made, with appropriate use of tables and graphical presentations of results. Vote-counting (summing the studies which gave positive or negative findings) is not an appropriate synthesis method as an indication of impact or effectiveness.|
|6.3||R/C||Report. Data synthesis – methods||If meta-analysis is conducted, full details of methods should be presented that justify the approach and enable replication, including study weighting and sensitivity analysis.||Ability of a third party to replicate the analysis is vital for transparency and reliability of the review.|
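One common, fully replicable weighting scheme is the inverse-variance random-effects model with the DerSimonian–Laird between-study variance estimator; a minimal sketch, with invented effect sizes and sampling variances (not the standards' prescribed method, just one widely used approach):

```python
import math

def random_effects(effects, variances):
    """Inverse-variance random-effects pooled effect (DerSimonian-Laird tau^2)."""
    w = [1 / v for v in variances]
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    # Cochran's Q: weighted squared deviations from the fixed-effect mean.
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)  # between-study variance
    # Re-weight with tau^2 added to each study's sampling variance.
    w_star = [1 / (v + tau2) for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1 / sum(w_star))
    return pooled, se, tau2

# Hypothetical standardised effect sizes and sampling variances per study.
effects = [0.8, -0.1, 0.6, 0.2]
variances = [0.04, 0.09, 0.05, 0.02]
pooled, se, tau2 = random_effects(effects, variances)
```

Reporting the model, the weights, and tau-squared alongside the pooled estimate is what lets a third party replicate the analysis; sensitivity analyses then rerun the same function on subsets of studies.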
|6.4||C||Report. Data synthesis – methods||Consideration should be given to study independence and risk of bias (e.g. through sensitivity analysis).||Comparing subsets of studies that differ in terms of independence or risk of bias may indicate how sensitive estimates of effect are to these factors.|
|6.5||C||Report. Data synthesis – Effect modifiers||Effects modifiers (e.g. taxa being considered, location, habitat type, study design etc.) should be investigated statistically through meta-analysis, or descriptively in narrative synthesis.||Studies differ in their results (heterogeneity), which may be due to chance, but could also reflect variables other than the factor of interest that differ between studies (effect modifiers). The presence and magnitude of effect modifiers can reveal important information about a system. Investigating heterogeneity therefore indicates the degree to which effects are generalisable across taxa, regions etc., and is also necessary to evaluate the appropriateness of combining studies conducted on different populations or reporting different outcome metrics. These can be investigated statistically in meta-analysis through for example subgroup analyses, sensitivity analyses and meta-regression. In narrative syntheses differences in findings may be discussed in terms of differences in study design, context, population, focus etc.|
|6.6||C||Report. Data synthesis – study validity||Results of critical appraisal of study validity should be used in considering individual study findings through statistical or narrative synthesis.||The attribution of validity estimates to studies should be used and explored when synthesising data to estimate mean effects.|
|6.7||C||Report. Data synthesis – narrative synthesis||The narrative synthesis should describe the body of evidence identified using figures and tables that supply information on all eligible studies.|
|Study Mapping||SM (SR can be relevant)|
|7.1||R/C||Protocol. Study mapping strategy||The choice of mapping and visualisation methods should be justified in the Protocol on the basis of scoping characteristics of included studies, taking into consideration variability between studies.|
|7.2||R||Report. Narrative synthesis||A narrative synthesis should describe the body of evidence identified using figures and tables.|
|7.3||R||Report. Map database||The database of eligible studies with all coded data forming the map should be presented in an additional file.|
|7.4||R/C||Report. Map visualisation||A map or maps (e.g. geographical) and/or an alternative visualisation (e.g. ‘heat map’) should be provided.|
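An evidence 'heat map' is essentially a cross-tabulation of study counts over two coded variables from the map database; a minimal sketch using hypothetical intervention and outcome codes:

```python
from collections import Counter

# Hypothetical coded metadata: (intervention, outcome) per eligible study.
coded = [
    ("fencing", "abundance"), ("fencing", "abundance"), ("fencing", "richness"),
    ("planting", "richness"), ("planting", "abundance"), ("fencing", "abundance"),
]

# Cell counts for the heat map: high counts = evidence clusters,
# zero counts = evidence gaps worth reporting.
cells = Counter(coded)
rows = sorted({i for i, _ in coded})
cols = sorted({o for _, o in coded})
for i in rows:
    print(i.ljust(10), *(str(cells[(i, o)]).rjust(3) for o in cols))
```

The same counts drive the shading in a plotted heat map; empty cells identify knowledge gaps and dense cells identify clusters that may warrant a full systematic review.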
|Review limitations and Implications||SR and SM|
|8.1||R||Report. Limitations||An explicit section must be devoted to the authors’ consideration of the limitations of their review, including limitations of the primary data (available evidence) and possible sources of bias in the conduct of the review process, together with recommendations for future syntheses and primary research.||All reviews will have limitations, and it is important that authors are explicit about the known limitations of the primary data and the conduct of the review process.|
|8.2||R||Report. Conclusions – implications for policy||Summarise the main implications for policy or other decision makers. Avoid making recommendations.||The aim here is to inform decision makers of the evidence and what it suggests, but not to recommend or advocate actions or policies.|
|8.3||R||Report. Conclusions – implications for research||Summarise the main implications for research, including evidence gaps and study design requirements.||Given the evidence base the review has synthesised, authors should summarise future evidence needs.|
|Other information||SR and SM|
|9.1||R||Protocol and Review. Competing interests||
All financial and non-financial competing interests must be declared in this section. See our editorial policies in the Environmental Evidence submission guidelines.
If you do not have any competing interests, please state “The authors declare that they have no competing interests” in this section.
|9.2||R||Protocol and Review. Funding sources||All sources of funding for the research reported should be declared. The role of the funding body in the design of the study and collection, analysis, and interpretation of data and in writing the manuscript should be declared.|
|9.3||R||Protocol and Review. Authors’ contributions||The individual contributions of authors to the manuscript should be specified in this section. Guidance and criteria for authorship can be found in our journal submission guidelines.|
|9.4||R||Protocol and Review. Acknowledgements||
Please acknowledge anyone who contributed towards the article who does not meet the criteria for authorship including anyone who provided professional writing services or materials.
Authors should obtain permission to acknowledge from all those mentioned in the Acknowledgements section. See our journal editorial policies for a full explanation of acknowledgements and authorship criteria. If you do not have anyone to acknowledge, please write “Not applicable” in this section.
|9.5||R||Protocol and Review. Additional files||Additional files should include a title and description of data (e.g. using ‘ReadMe’ tab), with appropriate keys to column and row headings.|