About the CEEDER Database

When faced with a decision or question that might be informed by evidence, users can search this database for evidence reviews and evidence overviews (including literature reviews, meta-analyses, critical reviews, systematic reviews and rapid reviews) relevant to their evidence needs.

The database is collated through a systematic search of commercially published journals and grey literature sources and covers the whole environmental sector. In addition, CEEDER provides an assessment of the reliability of each review based on conduct and reporting using the CEESAT review appraisal tool.

Search for reviews matching your criteria of interest below to begin exploring the CEEDER database.


How to use the database
Each synthesis is appraised for its rigour and reliability on an objective 4-point scale using our CEESAT tool. Research syntheses gain credit both for their rigorous conduct and transparent reporting and for the quality of the primary data and of the synthesis itself. Consequently, it is not strictly scientific merit that is being judged but the reliability and utility of the synthesis findings for decision making. Syntheses are scored in the context of the question(s) they address.

The CEESAT checklist provides a point-by-point appraisal of the confidence that can be placed in the findings of an evidence review by assessing the rigour of the methods used in the review, the transparency with which those methods are reported, and the limitations imposed on the synthesis by the quantity and quality of the available primary data. Note that CEESAT does not distinguish between reviews that do not employ methodology that reduces risk of bias and increases reliability of findings, and reviews that may have employed such methodology but do not report it.

The CEESAT criteria were developed to critically appraise reviews in terms of transparency, repeatability and risk of bias. For each of 16 elements, a review is rated in one of four categories of review methodology (Gold, Green, Amber, Red). Gold equates to the top standard currently recognised by CEE in review conduct, and Red is regarded as unreliable, as follows:

  • Gold: Meets the standards of conduct and/or reporting that reduce risk of bias as much as could reasonably be expected. Lowest risk of bias – high repeatability – highest reliability/confidence in findings.
  • Green: Acceptable standard of conduct/reporting that reduces risk of bias. Acceptable risk of bias – repeatable – acceptable reliability/confidence in findings.
  • Amber: Deficiencies in conduct and/or reporting standards such that the risk of bias is increased (above Green), or the risk of bias may be harder to assess. Medium risk of bias – not fully repeatable – low reliability/confidence in findings.
  • Red: Serious deficiencies in conduct and/or reporting such that the risk of bias is high. High risk of bias – not repeatable – little to no confidence in findings.

Any Red ratings in a review should be considered carefully to decide what impact they may have on the findings. One Red rating does not necessarily mean that you should have no confidence in the findings, but it might if that Red rating is for what you consider a crucial element of review conduct (e.g. eligibility criteria).

When you find a review relevant to your evidence needs, you can either use the scores as a whole to judge the review's reliability or look at particular elements that you feel are important for the context in which you are working. For example, you may feel that a comprehensive search strategy and clear eligibility criteria are crucial for you to have confidence in the findings, in which case you might want criteria 3 & 4 to be rated Gold or Green.

Although the categories could also be given numerical "scores" (e.g. from 1 to 4), using total scores or mean scores to compare review reliability is not necessarily meaningful, and we advise against this in any context except a crude "eyeballing". It may be more important to understand which elements of a review score Red or Amber and may therefore be deficient. If you hover over an individual element of a review, you can see an explanation for its rating.
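As a minimal, hypothetical sketch of why mean scores can mislead (the Red = 1 to Gold = 4 mapping and the two example rating profiles are assumptions for illustration, not real CEEDER data): a review with the higher mean score can still contain Red-rated elements that undermine confidence in its findings.

```python
# Hypothetical example: two reviews rated on all 16 CEESAT elements,
# using an assumed numerical mapping Red = 1, Amber = 2, Green = 3, Gold = 4.

review_a = [3] * 16                # all 16 elements rated Green
review_b = [4] * 14 + [1, 1]      # 14 Gold elements, but 2 Red elements

mean_a = sum(review_a) / len(review_a)   # 3.0
mean_b = sum(review_b) / len(review_b)   # 3.625

# Review B has the higher mean, yet its two Red ratings (which might fall on
# crucial elements such as eligibility criteria) could justify far less
# confidence in its findings than the uniformly Green Review A.
print(f"Review A mean: {mean_a}, Review B mean: {mean_b}")
print(f"Red-rated elements in B: {review_b.count(1)}")
```

This is why element-by-element inspection is more informative than a single aggregate number.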

Clearly, reviews whose ratings are all Reds and Ambers should be viewed with low confidence, but this does not mean that the findings are wrong. At the other end of the scale, reviews rated mostly Gold and Green can be viewed with high confidence, but this does not mean that the findings are right. Additionally, one or two Ambers and Reds among predominantly Golds and Greens could increase the potential for bias and decrease confidence substantially, and should therefore be considered carefully.

Finally, the CEEDER ratings are not a substitute for reading the review. There may be other aspects of the review that make it more or less useful to you as a source of evidence.