Education And Debate

Systematic Reviews: Reporting, updating, and correcting systematic reviews of the effects of health care

BMJ 1994; 309 doi: https://doi.org/10.1136/bmj.309.6958.862 (Published 01 October 1994) Cite this as: BMJ 1994;309:862
I Chalmers, B Haynes

UK Cochrane Centre, NHS R&D Programme, Oxford OX2 7LG; Canadian Cochrane Centre, Health Information Research Unit, McMaster University Medical Centre, Hamilton, Ontario L8N 3Z5, Canada.

    The recent growth in the number of published systematic reviews reflects growing recognition of their importance for improving knowledge about the effects of health care. In Britain the NHS R&D Programme has established two centres to prepare systematic reviews of existing information, and the Cochrane Collaboration - an international network of individuals and institutions - has evolved to produce systematic, periodically updated reviews of randomised controlled trials. The large amount of existing evidence that needs to be considered creates a problem for the reporting of systematic reviews: the need to ensure that the methods and results of systematic reviews are adequately described has to be reconciled with the limited space available in printed journals. A possible solution is electronic publication: reviews could be published simultaneously in a short, printed form and in a more detailed electronic form. Electronic publication also makes it easier to update reviews as new evidence becomes available or mistakes are identified.

    Primary and secondary research on the effects of health care: the dangerous consequences of double standards

    It was not until very recently that anyone drew attention to the fact that clinical investigators usually jettison scientific principles when they move from primary research to secondary research (reviews). It was Mulrow who, in 1987, first showed that this double standard was manifest in some of the world's leading medical journals,1 and Huth, in an accompanying editorial in Annals of Internal Medicine, argued that something ought to be done about it.2 The following year Oxman and Guyatt published guidelines to help people judge the scientific quality and trustworthiness of reviews.3

    The failure of clinical investigators to apply scientific principles to control biases and imprecision in their reviews of evidence about the effects of care can have serious consequences. For …
