The CEE Database of Evidence Reviews (CEEDER) is a new open access Evidence Service for decision makers in policy and practice and the general public. The service is run by CEE with the help of a volunteer network of editors, screeners and reviewers. Please read the brief overview below.
We welcome suggestions that will help us develop and improve this service. Please provide feedback via our questionnaire.
- Read more About CEEDER and About CEESAT
- Read the 'Guide for evidence users' below on this page
- Use CEESAT to critically appraise an Evidence Review or Evidence Overview
- Find Support for Authors and Editors and use the Checklist for Editors and Peer Reviewers of evidence reviews
- About CEEDER volunteers
What is CEEDER?
CEEDER is an open access evidence service to help evidence consumers find reliable evidence reviews and syntheses to inform their decision making. The CEEDER project team collates all forms of evidence reviews and syntheses relevant to environmental management, as they are published or otherwise become available globally. The CEEDER database lists available (commercially published and ‘grey’) syntheses of primary research (e.g. critical reviews, meta-analyses, systematic reviews, rapid reviews) conducted to assess evidence on a specific question of environmental policy or management relevance. Most importantly, the database provides an independent assessment, conducted to preset quality criteria (see CEESAT below), of the reliability of each synthesis with respect to its use in decision making. Presentation of the assessment is tailored to the needs of decision makers and other evidence consumers in governmental, non-governmental and private sectors as well as the general public.
The aims of CEEDER are not only to guide evidence consumers to reliable reviews but also to provide tools to authors, editors and peer reviewers for improving the reliability of future reviews in the environmental sector (see Checklist for editors and peer reviewers).
© 2021. CEEDER is licensed under a CC BY-SA 4.0 licence.
CEEDER contains research syntheses relevant to decision making in all areas of environmental management and policy. The service is global in scope, and we collate reviews using regular systematic searches of relevant databases and search engines. The service started with reviews published in 2018. Collated reviews are checked against eligibility criteria. The database contains ‘evidence reviews’, which claim to make some form of quantitative assessment of environmental impacts or of the effectiveness of interventions of direct interest to management or policy, and will soon include ‘evidence overviews’, which claim to have collated and/or mapped what evidence exists on environmental impacts and interventions.
Purely descriptive reviews or ‘expert’ opinion articles are not included unless the authors claim to provide a measure of effect. The specificity of the question varies from broad global issues to precise cause and effect relationships in single species or restricted areas. Human wellbeing is included when there is also a significant environmental component in the question.
The boundaries of subject scope between environmental management and other sectors are somewhat subjective. At present, the following areas are not specifically covered: Animal veterinary science, nutrition and behaviour; Plant physiology, nutrition, improvement, and growth regulation; Engineering and construction; Biotechnology and bioengineering; Human health, education, social welfare and social justice; Toxicology. Estimates of change without investigation of cause are also excluded.
How the service can be used
Faced with a decision or question that might be informed by evidence, the user can search for relevant syntheses using a simple keyword system (a guide to searching will be provided). Syntheses of potential relevance will be listed together with their reliability ratings. The user can then choose the most relevant and reliable syntheses, taking note of their limitations. Links will be provided to the location of each synthesis article (please note: CEEDER cannot provide access to the full text of articles for copyright reasons, so a subscription may be required).
Guide for Evidence Users: Using CEESAT criteria as an indicator of reliability of a review
Each synthesis is appraised for rigour and reliability on an objective 4-point scale, using our CEESAT tool. Research syntheses gain credit both for rigorous conduct and transparent reporting and for the quality of the primary data and the synthesis itself. Consequently, it is not strictly scientific merit that is judged but the reliability and utility of the synthesis findings for decision making. Syntheses are scored in the context of the question(s) they address.
The CEESAT checklist provides a point by point appraisal of the confidence that can be placed in the findings of an evidence review by assessing the rigour of the methods used in the review, the transparency with which those methods are reported and the limitations imposed on synthesis by the quantity and quality of available primary data. Note that CEESAT does not distinguish between reviews that do not employ methodology that reduces risk of bias and increases reliability of findings and reviews that may have employed such methodology but do not report it.
The CEESAT criteria were developed to critically appraise reviews in terms of transparency, repeatability and risk of bias. For each of 16 elements, a review is rated using one of four categories (Gold, Green, Amber, Red) of review methodology. Gold equates to the top standard of review conduct currently recognised by CEE, and Red is regarded as unreliable, as follows:
Gold: Meets the standards of conduct and/or reporting that reduce risk of bias as much as could reasonably be expected. Lowest risk of bias – high repeatability – highest reliability/confidence in findings.
Green: Acceptable standard of conduct/reporting that reduces risk of bias. Acceptable risk of bias – repeatable – acceptable reliability/confidence in findings.
Amber: Deficiencies in conduct and/or reporting standards such that the risk of bias is increased (above Green); alternatively, risk of bias may be less easy to assess. Medium risk of bias – not fully repeatable – low reliability/confidence in findings.
Red: Serious deficiencies in conduct and/or reporting such that the risk of bias is high. High risk of bias – not repeatable – little to no confidence in findings.
Any Red ratings in a review should be considered carefully to decide what impact they may have on the findings. One Red rating does not necessarily mean that you should have no confidence in the findings, but it might if that Red rating is for what you consider a crucial element of review conduct (e.g. eligibility criteria).
When you find a review relevant to your evidence needs, you can either use the scores as a whole to judge the review's reliability or look at particular elements that you feel are important for the context in which you are working. For example, you may feel that a comprehensive search strategy and clear eligibility criteria are crucial for you to have confidence in the findings, in which case you might want criteria 3 & 4 to be rated Gold or Green.
Although the categories could also be given numerical scores (e.g. from 1 to 4), using total or mean scores to compare review reliability is not necessarily meaningful, and we advise against this in any context beyond a crude “eyeballing”. It may be more important to understand which elements of a review score Red or Amber and may therefore be deficient. If you hover over an individual element of a review, you can see an explanation for its rating.
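The element-by-element approach described above can be sketched in code. This is purely an illustrative sketch: CEEDER is a web interface rather than an API, so the review records, ratings and criterion numbers below are hypothetical examples of how a user might record and filter CEESAT ratings for themselves.

```python
# Illustrative sketch only: the review names and ratings below are
# hypothetical, not real CEEDER data or an actual CEEDER API.

# Ratings that indicate acceptable (or lower) risk of bias.
ACCEPTABLE = {"Gold", "Green"}

# Hypothetical CEESAT ratings, keyed by criterion number (1-16).
reviews = {
    "Review A": {3: "Gold", 4: "Green", 7: "Amber"},
    "Review B": {3: "Red", 4: "Green", 7: "Gold"},
}

def meets_requirements(ratings, crucial_elements):
    """True if every crucial CEESAT element is rated Gold or Green."""
    return all(ratings.get(el) in ACCEPTABLE for el in crucial_elements)

# Example: require a strong search strategy (criterion 3) and clear
# eligibility criteria (criterion 4), as in the text above.
selected = [name for name, ratings in reviews.items()
            if meets_requirements(ratings, crucial_elements=[3, 4])]
print(selected)  # ['Review A']
```

Note that, in line with the advice above, this sketch checks individual crucial elements rather than summing ratings into a single score.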
Clearly, reviews whose ratings are all Red and Amber should be viewed with low confidence, but that does not mean the findings are wrong. At the other end of the scale, reviews rated mostly Gold and Green can be viewed with high confidence, but that does not mean the findings are right. Additionally, one or two Ambers or Reds among predominantly Golds and Greens could increase the potential for bias and decrease confidence substantially, and should therefore be considered carefully.
Finally, the CEEDER ratings are not a substitute for reading the review. There may be other aspects of the review that make it more or less useful to you as a source of evidence.
CEEDER is a free service provided by CEE thanks to the many volunteers who make up the Editorial Board and Review College, whom we gratefully acknowledge.