About the CEEDER Database

CEEDER is an open-access evidence service provided by the Collaboration for Environmental Evidence (CEE). The service collates all forms of evidence reviews and syntheses relevant to environmental management and policy, covering both interventions and anthropogenic impacts on the environment. Articles are considered for addition to the CEEDER database as they are published or otherwise become available globally.

The database allows decision makers to search for and identify evidence syntheses relevant to their evidence needs. The added value of the service is that it also provides a critical appraisal of each review’s reliability, based on the conduct of the review itself and the primary data available to it, using the CEE Synthesis Appraisal Tool (CEESAT).

Although primarily aimed at decision makers wishing to use evidence to inform their decisions, CEEDER will also aid in improving the reliability of reviews in the environmental sector by offering guidance and resources to authors, editors and peer reviewers.

How to use the database

You can use the keyword search facility together with dropdown menus to select date ranges and review types (see below) to produce a list of relevant reviews. In the list you can then compare the reviews for relevance and reliability using the CEESAT criteria. A link to the full text of each review is provided through its DOI, but access will vary depending on publisher and institutional arrangements.

Currently CEEDER identifies two main types of reviews or syntheses:

An ‘Evidence Review’ is a review and/or synthesis (e.g. meta-analysis) of primary research findings where the objective is stated as providing an answer to a question or test of a hypothesis relating to effectiveness of an intervention or impact of an exposure.

An ‘Evidence Overview’ is a review of primary research where a main objective is stated as assessing or mapping the distribution and abundance of evidence in primary studies (e.g. geographic and taxonomic patterns for identifying knowledge gluts and gaps), and/or configuring bodies of evidence on a specific topic of interest, in relation to a specified question on the effectiveness of an intervention or the impact of an exposure.

Using CEESAT criteria as an indicator of reliability of a review

The CEESAT criteria were developed to critically appraise reviews in terms of transparency, repeatability and risk of bias. For each of 16 elements of review methodology, a review is rated in one of four categories (Gold, Green, Amber, Red). Gold equates to the top standard in review conduct currently recognised by CEE, and Red is regarded as unreliable. Once you have found a review of relevance to your evidence needs, you can either use the ratings as a whole to judge the review’s reliability or look at particular elements that you feel are important for the context in which you are working. Although the categories are also given “scores” from 1 to 4, using total or mean scores to compare review reliability is not necessarily meaningful, and we advise against it in any context beyond a crude “eyeballing”. It may be more important to understand which elements of a review score Red or Amber, and may therefore be deficient.
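To see why total or mean scores can mislead, consider the following hypothetical illustration (the scores are invented for this example, and the mapping Red=1 through Gold=4 is our reading of the 1–4 scale mentioned above): two reviews can have identical mean scores while differing sharply in how many elements are rated Red.

```python
# Hypothetical illustration: mean CEESAT-style scores can mask unreliable elements.
# Assumed category-to-score mapping (Red=1 .. Gold=4); scores here are invented.
SCORE = {"Red": 1, "Amber": 2, "Green": 3, "Gold": 4}

review_a = ["Green"] * 16                                 # uniformly adequate on all 16 elements
review_b = ["Gold"] * 10 + ["Amber"] * 2 + ["Red"] * 4    # excellent in parts, unreliable in others

def mean_score(ratings):
    """Average numeric score across the 16 rated elements."""
    return sum(SCORE[r] for r in ratings) / len(ratings)

print(mean_score(review_a))        # 3.0
print(mean_score(review_b))        # 3.0 -- identical mean...
print(review_b.count("Red"))       # 4   -- ...but four elements rated unreliable
```

The two reviews tie on mean score, yet review B has four Red elements that the average completely hides, which is why inspecting individual element ratings is more informative than comparing totals.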

How are evidence reviews identified and rated?

Step 1: We perform a systematic search of multiple databases and use search engines to collect potential environmental evidence reviews. Searches are regularly updated.

Step 2: We use a set of eligibility criteria (see below) to screen potential reviews for inclusion in the CEEDER database.

Step 3: Eligible evidence reviews are randomly allocated to Review College members for rating. The members rate the reliability of evidence reviews using CEESAT.

The scope of reviews in CEEDER

The scope of reviews included in CEEDER currently covers the whole of the environmental sector (environmental science, policy and management) and is global. The specific question or topic of the review should have relevance for policy or practice and there should be intent to synthesise primary studies and provide a measure of effect (e.g. impact of an activity or effectiveness of an intervention). Purely descriptive reviews or ‘expert’ opinion articles are not included unless the authors claim to provide a measure of effect. The specificity of the question varies from broad global issues to precise cause and effect relationships in single species or restricted areas. Human wellbeing is included when there is also a significant environmental component in the question.

Currently we specifically exclude the following subject areas except where a clear link is made to environmental management: animal veterinary science; animal nutrition; animal behaviour; plant physiology, nutrition, improvement and growth regulation; engineering and construction; biotechnology and bioengineering; human health; education; social welfare and social justice; toxicology; and species distribution and abundance (where no cause and effect is addressed).

How we searched for relevant reviews

To find environmental evidence reviews, we currently conduct searches in bibliographic platforms/databases including CAB Direct, Scopus, and Web of Science Core Collection. These searches are updated at weekly and monthly intervals. We also use search engines to facilitate capture of non-commercially published (‘grey’) literature; these searches are updated every three months. Collected records are then screened against our eligibility criteria (see below).

Eligibility criteria for included reviews

Decisions on eligibility involve some subjective judgement and we will not always get it right. We welcome feedback from users on what is or is not included in CEEDER. Articles to be included in the CEEDER database are required to meet each of the following eligibility criteria:


Evidence Reviews

The following inclusion criteria are applied for evidence reviews by screeners:

  • Population (P)—any population (biological and/or statistical) of relevance to environmental management.
  • Intervention (I) or Exposure (E)—either an intervention that is imposed to provide an environmental outcome OR a factor to which a population (biological and/or statistical) is exposed.
  • Comparator (C)—an appropriate comparator to enable an estimate of an absolute or relative effect to be measured.
  • Outcome (O)—any change in the population that has a relevance to environmental management.
  • Study type—the article type should be a review and/or synthesis (e.g. meta-analysis) of primary research findings (effectiveness or impact) where the objective is stated as providing an answer to a question or test of a hypothesis relating to effectiveness of intervention or impact of exposure.
  • Subject scope—the specific question or topic of the review should be relevant to environmental management and have recommendations for policy or practice. We specifically exclude the following subject areas except where a clear link is made to environmental management:
    • Animal vet science
    • Animal welfare and rights
    • Animal nutrition (e.g. effect of food additives on livestock)
    • Lab-based studies on plant nutrition, improvement and growth regulation
    • Engineering and construction
    • Biotechnology and bioengineering
    • Pure animal behaviour or lab-based studies
    • Human health and wellbeing
    • Education other than effectiveness of environmental education interventions
    • Social welfare and social justice
    • Toxicology
    • Species distribution
    • Epidemiology (e.g. impact on infection rate)
    • Food sciences (e.g. nutrition and management of post-harvest fruits and vegetables)


Evidence Overviews

The following inclusion criteria are applied for evidence overviews by screeners:

  • Question—the review addresses a question relating to the effectiveness of interventions or the impacts of exposure (PICO or PECO, as above);
  • Study type—the article should be an ‘Evidence Overview’ as defined above;
  • Subject scope—same as above.

Note that a measure of effect may be provided and/or discussed in evidence overviews, but these aspects (data extraction and synthesis of the measure of effect) are not assessed.


CEEDER Review College & CEESAT rating

Eligible evidence reviews are randomly allocated to CEEDER Review College members, who rate the reliability of evidence reviews using CEESAT (see About CEESAT for the criteria used). Several Review College members apply CEESAT to each evidence review to ensure consistent ratings. Disagreements in rating are resolved by the CEEDER Editorial Team.


CEEDER is a free service provided by CEE, made possible by the contributions of volunteers, whom we gratefully acknowledge:

CEEDER Executive Team: Jacqui Eales, Geoff Frampton, Ruth Garside, Neal Haddaway, Christian Kohl, Barbara Livoreil, Biljana Macura, Bethan O’Leary, Andrew Pullin, Nicola Randall, Paul Woodcock

CEEDER Editorial Team and Review College

Support from Natural Resources Wales