What is the evidence that postgraduate teaching in evidence based medicine changes anything? A systematic review
BMJ 2004;329 doi: https://doi.org/10.1136/bmj.329.7473.1017 (Published 28 October 2004). Cite this as: BMJ 2004;329:1017
Authors’ reply
Editor,
We agree with van der Berg that the findings of our review concerning
clinically integrated teaching are likely to be consistent with the
beliefs of the vast majority of medical teachers and learners, and may
have wider generalisability. We also agree that standalone teaching has
benefits of its own, including networking between those with an interest
in EBM. It is therefore reasonable to propose that standalone teaching
should not be abandoned, but that the cost-effectiveness of an effort
with limited benefits should be carefully considered.(1)
We note that Pozo was not surprised by the findings as “it is widely
accepted that clinically integrated teaching helps students to consolidate
knowledge and practise skills” – and we agree. Pozo is correct also in
pointing out that there were no head-to-head comparisons between
standalone and clinically integrated teaching and that we have relied on
indirect comparison of the teaching methods. However, Pozo’s labelling of
an indirect comparison as “not a true comparison” needs reconsideration
as, at least in the case of indirect comparisons within randomised trials
through a common third comparator, empirical evidence shows that indirect
comparisons agree well with direct comparisons.(2)
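To make the logic concrete, an adjusted indirect comparison of the kind validated by Song et al (2) can be sketched in a few lines. The event counts below are entirely hypothetical: two imagined trials share a common comparator (usual teaching), and the Bucher method contrasts the two randomised effects through that comparator.

```python
import math

def log_odds_ratio(events_a, no_events_a, events_b, no_events_b):
    """Log odds ratio and its variance for one two-arm comparison."""
    lor = math.log((events_a * no_events_b) / (no_events_a * events_b))
    var = (1 / events_a + 1 / no_events_a
           + 1 / events_b + 1 / no_events_b)
    return lor, var

# Hypothetical trials sharing a common comparator C (usual teaching):
lor_ac, var_ac = log_odds_ratio(18, 7, 10, 15)   # integrated (A) vs C
lor_bc, var_bc = log_odds_ratio(14, 11, 10, 15)  # standalone (B) vs C

# Adjusted indirect comparison of A vs B through C: the randomised
# contrasts are preserved, but the two variances add, so the
# confidence interval is wider than a direct comparison would give.
lor_ab = lor_ac - lor_bc
se_ab = math.sqrt(var_ac + var_bc)
ci_ab = (lor_ab - 1.96 * se_ab, lor_ab + 1.96 * se_ab)
print(math.exp(lor_ab), [round(math.exp(x), 2) for x in ci_ab])
```

The widened interval is the price of indirectness: the comparison respects randomisation within each trial, but precision is lost relative to a head-to-head trial.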
We agree with Pozo’s observation of heterogeneity in the studies.
Many studies gave little information about the participants, teaching
methods, and assessment tools or outcome measures. However, it was clear
that they varied substantially not only in their teaching methods,
assessment tools, and outcomes, but also in their methodological quality. Such
paucity of reporting combined with inherent heterogeneity was the reason
we settled for a qualitative synthesis in the form of ‘vote-counting’.
However, the vote-counting was conducted within broad subgroups of
teaching methods and educational outcomes, stratified by study
methodology. This approach of minimising bias in vote-counting by
incorporating study quality has previously been used to synthesise
heterogeneous results.(3) To determine whether a study showed improvement or no change
in our vote-counting, we relied on numerical data, and when this was
unavailable, on authors’ own reported inferences.
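A minimal sketch of this kind of stratified vote-counting, using hypothetical study records rather than the review's actual data, might look like:

```python
from collections import defaultdict

# Hypothetical study records, illustrative only:
# (teaching method, outcome, study design, direction of result)
studies = [
    ("integrated", "skills",    "RCT",            "improvement"),
    ("integrated", "behaviour", "RCT",            "improvement"),
    ("integrated", "skills",    "non-randomised", "improvement"),
    ("standalone", "skills",    "RCT",            "no change"),
    ("standalone", "knowledge", "non-randomised", "improvement"),
]

# Count votes within subgroups stratified by method, outcome, and design,
# so randomised and non-randomised evidence is never pooled together.
tally = defaultdict(lambda: {"improvement": 0, "no change": 0})
for method, outcome, design, result in studies:
    tally[(method, outcome, design)][result] += 1

for key, votes in sorted(tally.items()):
    print(*key, "->", votes["improvement"], "improved,",
          votes["no change"], "unchanged")
```

Keeping the strata separate is what distinguishes this from naive vote-counting across a heterogeneous pool of studies.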
We entirely agree with Pozo’s view that any teaching programme needs
to be tailored to the student’s learning needs as well as take into
account local views, preferences and resources, and take a multi-faceted
approach (4) for it to be effective. We believe this is what integration
of teaching into clinical practice can achieve.
We thank Grimshaw and Mayhew for pointing out our mistake in
misclassifying the study by McGinn et al (5) as a randomised trial, and we
apologise for this error. However, on reanalysis with reassignment of this
study (5) to the non-randomised group, we find that the weight of evidence
still clearly favours an integrated teaching method over standalone
teaching for each of the important outcomes (see Corrected Figure 1
below). Randomised evidence,(6) well supported by non-randomised evidence,
still shows that skills, attitudes and behaviour improve with an
integrated method of teaching, whilst none of the randomised trials in the
standalone group has shown an improvement in skills, attitudes, or
behaviour. While more evidence is always welcome, we feel that there is
compelling evidence, supported by common sense and theory (see Figure 2 in
the original article), that the focus should now be on implementation.
Arri Coomarasamy, Registrar in Obstetrics and Gynaecology
Khalid S Khan, Consultant in Obstetrics and Gynaecology
Education Resource Centre,
Birmingham Women’s Hospital,
Metchley Park Road,
Birmingham B15 2TG, UK
References
1. Taylor RS, Reeves BC, Ewings PE, Taylor RJ. Critical appraisal
skills training for health care professionals: A randomized controlled
trial. BMC Med Educ 2004;4:30.
2. Song F, Altman DG, Glenny AM, Deeks JJ. Validity of indirect
comparison for estimating efficacy of competing interventions: empirical
evidence from published meta-analyses. BMJ 2003;326:472.
3. ter Riet G, Kleijnen J, Knipschild P. Acupuncture and chronic
pain: a criteria-based meta-analysis. J Clin Epidemiol 1990;43:1191-9.
4. Grimshaw JM, Shirran L, Thomas R, Mowatt G, Fraser C, Bero L et
al. Changing provider behavior: an overview of systematic reviews of
interventions. Med Care 2001;39:II2-45.
5. McGinn T, Seltz M, Korenstein D. A method for real-time,
evidence-based general medical attending rounds. Acad Med 2002;77:1150-2.
6. Bradley DR, Rana GK, Martin PW, Schumacher RE. Real-time,
evidence-based medicine instruction: a randomized controlled trial in a
neonatal intensive care unit. J Med Libr Assoc 2002;90:194-201.
Corrected Figure 1 (for BMJ 2004;329:1017): Changes in knowledge,
skills, attitude, and behaviour after critical appraisal skills or EBM
teaching, grouped by quality of studies. Data presented as 100% stacked
bar chart with numbers inside bars indicating number of studies that
provided information for a particular outcome.
Competing interests:
Both AC and KSK have funding from West Midlands Deanery and European Union to promote Evidence-based Medicine.
Sirs,
The review by Coomarasamy and Khan [1] assessing postgraduate
teaching in evidence based medicine (EBM) addresses a very important
topic. However, we are concerned that the paper appears to overstate its
conclusions and is potentially misleading. The authors stated that "while
standalone teaching and integrated teaching are both effective in
improving the knowledge base, it is clinically integrated teaching of EBM
that is likely to bring about changes in skills, attitudes and behaviour."
We have a number of concerns with the methods and reporting of this
review. As Pozo [2] noted, there are no direct head-to-head comparisons
between the two methods of teaching EBM. It is important to be circumspect
about drawing any firm conclusions on the basis of indirect comparisons,
because there may be important confounding differences between the
participants, the content of the intervention (beyond its setting), and
the study settings. Further, the authors appear to have used some form of
vote counting technique; however, it is unclear how they determined
whether a study showed improvement or no change.
More worryingly, the authors appear to have misidentified the study
design of at least one included study. They claimed to have identified
two randomised trials evaluating integrated teaching. In fact, one of the
studies (McGinn et al [3]) is not a randomised trial. According to the
original paper, "two randomly selected teams of residents and medical
students in the internal medicine program" participated in the EBM
program. However, the paper did not suggest that there was a randomised
control group or present any control group data, suggesting that this was
a non-randomised study. Further, the other randomised trial of
integrated teaching by Bradley and colleagues [4] only included 10
residents on a rotation in a neonatal intensive care unit. Whilst this
appears to be a well conducted trial, it needs to be replicated with a
larger number of residents across a wider range of settings before one
could conclude that integrated teaching was likely to be generally
effective. Although the authors explained theoretically why integrated
teaching should be more effective than standalone courses, this conclusion
is not substantiated or refuted by the available clinical literature.
The authors concluded that "Teaching of evidence based medicine
should be moved from the classroom to clinical practice to achieve
improvements in substantial outcomes." We do not believe that the evidence
presented in this review supports this conclusion. Larger randomised
trials evaluating the effectiveness of integrated teaching of EBM and
randomised trials comparing integrated teaching and standalone courses are
required before reaching conclusions about this issue.
1. Coomarasamy A, Khan KS. What is the evidence that postgraduate
teaching in evidence based medicine changes anything? A systematic review.
BMJ 2004;329:1017.
2. Pozo AL. Not a true comparison of teaching methods? (10 November
2004) eLetter re: Coomarasamy A, Khan KS. What is the evidence that
postgraduate teaching in evidence based medicine changes anything? A
systematic review. BMJ 2004;329:1017.
3. McGinn T, Seltz M, Korenstein D. A method for real-time, evidence-
based general medical attending rounds. Acad Med 2002;77:1150-2.
4. Bradley DR, Rana GK, Martin PW, Schumacher RE. Real-time,
evidence-based medicine instruction: a randomized controlled trial in a
neonatal intensive care unit. J Med Libr Assoc 2002;90:194-201.
Competing interests:
Jeremy Grimshaw is the Co-ordinating Editor and Alain Mayhew is the Review Group Co-ordinator of the Cochrane Effective Practice and Organisation of Care (EPOC) group. EPOC has a review of the effects of critical appraisal training. Jeremy Grimshaw is a co-author of one of the non randomised studies of "standalone teaching" included in the review.
Coomarasamy and Khan’s review suggests that the best place to learn
critical appraisal skills and evidence-based medicine is in a clinically
relevant setting [1]. This should be no surprise: it is widely accepted
that clinically integrated teaching helps students to consolidate
knowledge and practise skills.
Caution is warranted, however, before making every effort to move EBM
teaching “from classrooms to clinical practice.” On examining the
evidence, none of the included studies compared a standalone teaching
method with a clinically integrated one; all compared the effects of one
intervention with a control group or baseline before teaching [2]. While
some interventions produced greater improvements in knowledge, critical
appraisal skills, attitudes or behaviour, it requires some extrapolation
to assume that relative success resulted primarily from the method being
“clinically integrated”.
Measures of learning achievements differed between studies, many
being subjective (self-assessment). Each set of results will have been
influenced by numerous other factors specific to the population and
intervention, such as student motivation and teacher enthusiasm. The
sample sizes and statistical significance of “improvements” were also
omitted, making it difficult to compare study quality or impact.
Outcomes of educational programmes depend on so many variables that
randomised controlled trials are vital to compare interventions fairly,
although (as the authors discuss) these are unfeasible in some settings.
Answers to the question asked in the title of the paper are available
nonetheless: there is encouraging evidence that postgraduate teaching in
EBM does have measurable benefits.
The relative merits of “standalone” and “integrated” teaching depend
on the time and resources available to students and teachers. Classroom
teaching may be more appropriate in some circumstances, particularly since
clinical time is so precious. The most successful teaching programmes
will ultimately be those tailored to their students’ learning needs. This
may well be achieved best through local strategies (feedback and audit),
rather than relying on extrapolation from research.
[1] Coomarasamy A, Khan KS. What is the evidence that postgraduate
teaching in evidence based medicine changes anything? A systematic review.
BMJ 2004;329:1017-1019.
[2] Full version of [1] on bmj.com
Competing interests:
None declared
This review confirms what many have always known or suspected:
Teaching and learning on the job is better than the class-room model.
Although this evidence is specific to EBM and, in particular, to
postgraduates, there is every reason to believe that the findings would
be generalisable to other fields and other target audiences. This would
suggest that there should be a sustained effort to move learning from
"classrooms to clinics" for every clinical topic (not just EBM) - although
this may require greater resources, the outcomes are likely to be worth
it. In a climate where educational events appear to be proliferating,
medical teachers (and students!) need to stop and ask if the way they
teach and learn is the most effective way.
It should be noted, however, that classroom teaching is not
"ineffective": it simply does not change attitudes and behaviour. It does
indeed improve knowledge, and that in itself may be a worthwhile outcome
(and a starting point for changing attitudes and behaviour). Moreover,
the classroom model has its own advantages, such as allowing networking
between colleagues. Thus the issue is arriving at the right balance
between classroom teaching and clinically integrated teaching, and
appreciating what each has to offer.
Competing interests:
None declared
Sirs,
I agree completely with the wise conclusion of this enlightening
article, wherein the authors state: “Teachers of critical appraisal and EBM
should aim to bring teaching out of classrooms into the clinic, but this
will require a greater effort” (1). Certainly, doctors need reinforcement
of theoretical EBM knowledge in day-to-day practice, to prevent even the
modest knowledge gains from such courses from deteriorating over time.
However, in my opinion, based on 47 years of clinical experience, such
teaching and learning of EBM, even when integrated into routine practice,
is in general not able to bring greater benefits, as we may see clearly
in, for example, the paramount expense of money as well as the avoidable
anxiety of patients waiting a long time for the often unhelpful results
of laboratory and imaging departments.
In addition, we must remember, or fortunately know, that not all
patients must undergo well-defined investigations, since, although
overlooked, there exist biophysical semeiotic constitutions and,
consequently, Single Patient Based Medicine: for example, individuals
without the diabetic “and” dyslipidemic biophysical semeiotic
constitutions will never be affected by type 2 diabetes and/or
dyslipidemia (2-5) (see NONCode web-site 233736,
www.semeioticabiofisica.it: Constitutions. SPBM). Therefore, we should
consider not only EBM but also SPBM, i.e. “Single Patient Based
Medicine” (3), as I have been suggesting, unheeded of course, over the
last decade, first on BMJ.com
(http://bmj.bmjjournals.com/cgi/eletters/328/7450/1213,
http://bmj.bmjjournals.com/cgi/eletters?lookup=by_date&days=1#62494,
http://bmj.bmjjournals.com/cgi/eletters?lookup=by_date&days=1#57417,
http://bmj.bmjjournals.com/cgi/eletters/328/7455/1529#64770, and so on)
as well as on the EU Health Authority website
(http://www.google.it/search?q=cache:U5A
-tWmRDsJ:europa.eu.int/comm/health/ph_information/documents
/ev_20030710_co01_en.pdf+single+patient+based+medicine+and+
stagnaro&hl=it&ie=UTF-8).
In fact, although unfortunately overlooked all around the world,
Biophysical Semeiotic Constitutions do really exist. Interestingly, for
example, in individuals without the “Hypertensive Constitution” arterial
hypertension will surely not occur, even in the lasting presence of
environmental risk factors and under whatever unfavourable lifestyle. In
a few words, thanks to SPBM, we can nowadays easily recognise at the
bedside subjects with particular constitution(s) (e.g. Oncological
Terrain) who can develop defined diseases under environmental risk
factors.
Finally, there is convincing evidence that cigarette smoking is a
risk factor for type 2 diabetes. Indeed, cigarette smoking has been
consistently associated with a relatively small but significantly
increased risk of type 2 diabetes in both men and women (2,5). However,
only individuals with both the diabetic “and” dyslipidemic biophysical
semeiotic constitutions can suffer from type 2 diabetes mellitus, as my
46 years of clinical experience allow me to state (4,5).
1. Coomarasamy A, Khan KS. What is the evidence that postgraduate
teaching in evidence based medicine changes anything? A systematic review.
BMJ 2004;329:1017. doi:10.1136/bmj.329.7473.1017
2. Stagnaro S, Stagnaro-Neri M. Introduzione alla Semeiotica Biofisica.
Il Terreno Oncologico. Travel Factory SRL, Roma, 2004.
http://www.travelfactory.it/semeiotica_biofisica.htm
3. Stagnaro S, Stagnaro-Neri M. Single Patient Based Medicine. Travel
Factory SRL, Roma, in press.
4. Stagnaro S, Stagnaro-Neri M. Le Costituzioni Semeiotico-Biofisiche.
Strumento clinico fondamentale per la prevenzione primaria e la
definizione della Single Patient Based Medicine. Ediz. Travel Factory,
Roma, 2004.
5. Stagnaro S. Diet and risk of type 2 diabetes. N Engl J Med
2002;346:297-8. Letter.
6. Stagnaro S, Stagnaro-Neri M. Valutazione percusso-ascoltatoria del
Diabete Mellito. Aspetti teorici e pratici. Epat 1986;32:131.
Competing interests:
None declared
Some methodological defects
Dear Sir
The conclusions of Coomarasamy and Khan’s review (1) are rational, but
the study suffers from some methodological defects that threaten its
validity:
Publication Bias
A recent study estimated that 56% of meta-analyses had at least one
study missing (2). In most cases publication bias does not affect the
conclusions, but it is more likely to affect small studies than large
ones, and in such cases a review's conclusions may be altered (3).
Coomarasamy and Khan’s review includes only a small number of studies;
hence its conclusions are prone to publication bias. A simple sensitivity
analysis using two-by-two comparisons of knowledge, skills, attitudes,
and behaviour in the integrated and standalone groups using Fisher’s
exact test showed that adding just one “no change” article to the
integrated group would render the results non-significant in all
comparisons. Moreover, the investigators did not make strenuous efforts
to find unpublished studies and so reduce publication bias.
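A sensitivity analysis of this kind can be sketched in a few lines of Python. The vote counts below are purely hypothetical (the letter does not give its actual tables); they are chosen only to illustrate how one additional “no change” study can move a Fisher's exact test across the 0.05 threshold.

```python
from math import comb

def fisher_exact_p(a, b, c, d):
    """Two-sided Fisher's exact p-value for the 2x2 table [[a, b], [c, d]]:
    sums every hypergeometric probability no larger than the observed one."""
    row1, row2, col1, n = a + b, c + d, a + c, a + b + c + d

    def prob(x):  # P(top-left cell = x) with all margins fixed
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)

    p_obs = prob(a)
    lo, hi = max(0, col1 - row2), min(col1, row1)
    return sum(prob(x) for x in range(lo, hi + 1) if prob(x) <= p_obs + 1e-12)

# Hypothetical vote counts (improvement vs "no change"):
# integrated 5 vs 0, standalone 1 vs 4
p_before = fisher_exact_p(5, 0, 1, 4)  # significant at the 0.05 level
# Reassign one hypothetical "no change" study to the integrated group:
p_after = fisher_exact_p(5, 1, 1, 4)   # no longer significant
print(round(p_before, 4), round(p_after, 4))
```

With these illustrative counts the p-value moves from about 0.048 to about 0.080, which is the fragility the letter describes: a single study tips the comparison from significant to non-significant.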
Simple Vote Counting
Simple vote counting may be misleading. It ignores the sample sizes,
the magnitude of the effects in the constituent studies, and the validity
of their design. Therefore, in general, vote counting should be avoided
(4). Instead, many systematic reviews use qualitative methods for
comparing the studies in the presence of heterogeneity. A qualitative
analysis consists of using various levels of evidence regarding the
effectiveness of a treatment, taking into account the participants,
interventions, controls, outcomes, and methodological quality of the
original studies. The same structured approach should be applied to a
qualitative analysis as is applied to a quantitative analysis (5).
To undertake vote counting properly, the number of studies showing harm
should be compared with the number showing benefit, regardless of the
statistical significance or size of their results (4). In the present
study, however, the researchers compared the studies reporting
“improvement” with the studies reporting “no change”, and if the term
“no change” means “non-significant results”, this comparison is deceptive.
Indirect comparisons
No single study directly compares the “integrated” and “standalone”
teaching methods; all the trials compared one of these methods with other
conventional controls. The authors used these indirect comparisons as
evidence for the greater effectiveness of the “integrated” teaching
method. Direct comparison of the relevant single arms of the trials
should never be used (4): such a comparison ignores the potential
benefits of randomisation and suffers from the same (usually extreme)
biases as a comparison of independent cohort studies.
1. Coomarasamy A, Khan KS. What is the evidence that postgraduate
teaching in evidence based medicine changes anything? A systematic review.
BMJ 2004;329:1017-9.
2. Sutton AJ, Duval SJ, Tweedie RL, Abrams KR, Jones DR. Empirical
assessment of effect of publication bias on meta-analyses. BMJ
2000;320:1574-7.
3. Sterne JAC, Egger M, Davey Smith G. Systematic reviews in health
care: investigating and dealing with publication and other biases in
meta-analysis. BMJ 2001;323:101-5.
4. Alderson P, Green S, Higgins JPT, editors. Cochrane Reviewers’
Handbook 4.2.2 [updated March 2004].
http://www.cochrane.org/resources/handbook/hbook.htm.
5. van Tulder MW, Furlan A, Bombardier C, Bouter LM, the Editorial
Board of the Cochrane Collaboration Back Review Group. Updated method
guidelines for systematic reviews in the Cochrane Collaboration Back
Review Group. Spine 2003;28:1290-9.
Competing interests:
None declared