Why doctors don't read research papers
BMJ 2004; 329 doi: https://doi.org/10.1136/bmj.329.7479.1411-a (Published 09 December 2004) Cite this as: BMJ 2004;329:1411. All rapid responses
As a doctor currently doing an MSc in Epidemiology, the statistics I'm learning feel like first-year university mathematics undergraduate level. Doctors in clinical practice cannot be expected to understand this level of maths. I've always felt that a truly bright professor or teacher is one who makes a complicated subject simple and does not merely blind with science. There is certainly a place for simpler writing, and perhaps an editorial comment on each paper to explain the stats. Having studied so many weak papers on our course that were accepted by journals such as the New England Journal of Medicine, the British Medical Journal and the Lancet, perhaps these high-profile journals should publish more epidemiological or statistical critiques of their contents, so busy clinicians do not have to take time out of their schedules to learn about likelihood theory in logistic regression...
Competing interests: None declared
Doctors don't read research papers, YES, and I don’t blame them! In response to the review article by Kevin Barraclough [1].
Though we would like our medical practitioners to have the benefit of current and updated knowledge, the wealth of information and its exponential growth with technology is overwhelming!
Most experienced doctors know which medical practices are prone to modification and change. These doctors should form a district or countrywide forum and hold regular meetings, including journal club activities, to update their knowledge and share the burden of information. An electronic Q&A session would serve the purpose more effectively; many websites are already marching towards this goal, including BMJ Learning [2] and WebMD [3], to name a few. Apart from these, the collection of knowledge-based information from PubMed, Medline and other literature databases can be better accessed with search engine tools such as Google, especially its new search modules [4], graphical search tools like Grokker [5] and search organisers like ClusterMed [6]. These tools help users navigate the extensive inventory of medical research articles by organising the search results into categories.
Overall, the concern that doctors don't read research papers is a valid one. It is our responsibility to help them organise the overwhelming information, provide them with its organised essence, and encourage them to read more and keep up to date with current methods and practices.
[1] Barraclough K. Why doctors don't read research papers. BMJ 2004;329:1411.
[2] BMJ learning website URL: http://www.bmjlearning.com/
[3] WebMD website URL: http://www.webmd.com/
[4] Google Scholar website URL: http://scholar.google.com/
[5] Grokker website URL: http://www.grokker.com/
[6] ClusterMed website URL: http://clustermed.info/
Competing interests: None declared
My sympathy lies, as it usually does, with Kevin Barraclough.
I have suggested elsewhere [1] that the reader-deterring style in which most scientific papers are written has evolved because they are written not to be read but to be published. Authors are eager to get their names in print not because they are bursting to tell us something but for more solemn reasons. Another paper means another line on a CV, another step towards a job or a research grant.
In 1976 in the Lancet we missed one of the great opportunities of 20th century medicine when Dr. J B Healy, like another Irishman 250 years before him, submitted a modest proposal. [2]
“It seems to me that we should for an experimental period of a year, declare a moratorium on the appending of authors’ names and of the names of hospitals to articles in medical journals. If the dissemination of information is the reason why papers are submitted for publication, there will be no falling off in the numbers offered. … But if far less material is offered to the journals, we shall have unmasked ourselves.”
No editor has yet been brave enough to conduct that experiment. Not even Richard Smith, who, when editor of the BMJ, said that only 5 per cent of published papers reached minimum standards of scientific soundness and clinical relevance, and that in most journals the figure was less than 1 per cent.[3]
The reluctance to take up Dr Healy’s suggestion confirms the observation of the editor of Nature that scientific papers serve the needs of their authors above those of their readers.[4] Why else would a journal devote five pages to a paper that reached this conclusion? “In this pilot study, the null hypothesis that both treatments will show equal results cannot be confirmed or rejected because of the small number of participants." [5]
We need to exorcise the myth that, to write readably about science, authors have to write superficially or grossly simplify their subject. The real challenge is to present complexity in an understandable way. Anyone who has tried to do it knows that it is hard work. The writers of too many scientific papers are not prepared to make the effort.
Michael O'Donnell
michael@odonnell99.freeserve.co.uk
1. O’Donnell M. Evidence-based illiteracy: time to rescue “the literature”. Lancet 2000;355:489-91
2. Healy J B. Why do you write? Lancet 1976;1:204
3. Boseley S. News report. The Guardian, 24 June 1998
4. Maddox J. Quoted in: Communicating science: A handbook. London: Longman,1991:51
5. Lamers HJ, Jamin RH, Zaat JO, et al. Dietary advice for acute diarrhoea in general practice: a pilot study. Br J Gen Pract 1998;48:1819-23
Competing interests: None declared
Dr Barraclough's article evoked some interesting responses, but it was rather surprising that one important aspect seems to have been overlooked. The need for statistics to be presented in the way they are is usually to maintain uniformity of understanding. We cannot have different researchers coming out with vague statistics just to prove their point. The rule of thumb seems to be: if the statistics seem dodgy, they probably are!
The beauty of statistics is that an avid researcher can present data to suit his convenience, for or against an argument, but if the protocol is to accept results in a standard format there is less leeway for spurious research. This is usually ensured by the editorial staff of most peer-reviewed journals.
To be non-conversant with the basic statistical measures used to draw conclusions that influence your practice will, in my opinion, amount to negligence. There are a variety of ways in which today's clinician can keep abreast of cutting-edge research, and there are agencies out there to present it on a platter if so-called "clinical exigencies" forbid the active pursuit of excellence.
Competing interests: fledgling research
Doctors, please read new science and form your own opinions. Show care and give respect to patients who research and can diagnose their own health problems. My experience... http://www.alkalizeforhealth.net/Lnaturalgas.htm
Competing interests: Read new scientific data
I was struck by the number of comments about statistics in the responses to this piece. But there is a missing element in the debate that deserves to be raised.
It seems obvious that we can't do without statistics, but current practice seems to obfuscate truth rather than highlighting it. Could it be the way we teach and report statistical analysis that is the problem?
Gerd Gigerenzer argued in his superb book "Reckoning With Risk" (Penguin 2003) that the way we choose to present statistical results makes a great deal of difference to how well the results are understood, even by numerate professionals. The point seems obvious, but current practice is just about the opposite of what is required for good comprehension.
If my memory is reliable (I've lent out my copy of the book) Gigerenzer demonstrated his point by doing an experiment with GPs giving advice about the results of AIDS tests (they had to advise patients about a positive result given the various statistics of AIDS prevalence and test reliability). The relevant statistics were presented in two alternative ways (the basic difference relating to the use of percentages or using actual numbers). The advice given by the first group (using the traditional presentation of the relevant statistics) was often wrong; the alternative group (using the alternative presentation of the same data) mostly gave the right advice. (The stats imply that about 50% of positive tests in low risk groups will be false positives, hence the importance of appropriate advice).
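Gigerenzer's point can be made concrete with a small sketch. The numbers below are assumptions chosen to reproduce the roughly 50% figure mentioned above (a base rate of 1 in 10,000 and a false positive rate of 0.01%), not figures taken from the book:

```python
# Sketch of the natural-frequency framing: the same statistics,
# expressed as counts of people rather than conditional percentages,
# make the answer easy to see. All numbers are illustrative assumptions.

prevalence = 1 / 10_000       # assumed base rate in a low-risk group
sensitivity = 0.999           # P(test positive | disease)
false_positive_rate = 0.0001  # 1 - specificity

# Imagine 10,000 people taking the test.
population = 10_000
with_disease = population * prevalence                               # ~1 person
true_positives = with_disease * sensitivity                          # ~1 positive
false_positives = (population - with_disease) * false_positive_rate  # ~1 positive

ppv = true_positives / (true_positives + false_positives)
print(f"Of every {true_positives + false_positives:.1f} positive tests, "
      f"about {true_positives:.1f} is a true case: PPV = {ppv:.0%}")
```

Framed as percentages, the same inputs require Bayes' theorem to untangle; framed as "of 10,000 people, about 1 true positive and 1 false positive", the conclusion that roughly half of positives are false is immediate.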
The bottom line is that the form of presentation is more important for comprehension than the underlying numbers. Unfortunately the stats experts are often poor communicators and do not present their results in ways that are unambiguous and readily understood. Statistics teaching (especially to non-experts) is often poor and almost never seems to devote any time to clear presentation.
Perhaps we could make papers easier to read not by leaving out the stats, but by bringing in authors who know how to present them.
Competing interests: None declared
...but relying on intuition or experience, or a 'sense' of what may be wrong (re the rapid response below), may not be in the interests of the person with the problem. Whatever the personal bias of the practitioner, it needs to match the preferences of different individuals. For example:
The person may not wish to have a health worker involved in discussions about 'lifestyles'; however interested a practitioner may be in this part of their work, he or she may be someone who prefers to be given information on which to make more private decisions, or to discuss them elsewhere.
Not only do researchers have a stated or unstated bias, as K. B. says, but the type of research accessed by practitioners and the people who consult them can also be biased. Many people who have had serious conditions misdiagnosed would prefer that at least summaries of a wide selection of updated information be accessed, and that specific knowledge be in some way tested.
It would be useful if those with a special interest in psychological and social medicine were also required to show they are keeping up with all relevant research, as many who suffer from conditions such as MS or 'irritable bowel' keep pointing out.
The trouble is that those with much to teach others are still not included in genuine decision making about funding, do not sit on committees which influence decisions, and do not have access to politicians, who need to hear messages which suit their agenda. At present social/psychological medicine is popular in some areas because it fuels interest and anxiety, provides work for the thousands who have trained in the myriad courses available, and sometimes actually addresses some of the social problems identified by citizens themselves. The input of the medical profession, though, has made very little difference to the poor health of people in deprived areas such as the South Wales Valleys, where the concept of social medicine first began in the UK half a century ago.
Competing interests: None declared
Doctors should read scientific journal papers that pertain to their area of practice. For example, doctors who work with cancer patients should keep up with the latest publications that deal with cancer. More and more people with cancer are using the Internet for information about alternative treatments for themselves, and this can cause problems: this way of searching for information can lead the reader to be misguided or to misinterpret the data presented in the research paper(s). Because cancer research is going on all over the world, one should not depend on reading just one paper in isolation but should rather look at the whole picture and read more than one article on the same subject for any particular type of cancer. If a doctor reads just one article with statistical information showing that treatment Z is better than treatment Y, and does not read any further, he or she will most likely not learn that another study showed treatment Z is not better than treatment Y, and may end up practising bad medicine.
Competing interests: None declared
We consider statistics a general skill for people with a university education who analyse and interpret data, not only in their professional field but also in everyday life. As a medical doctor by background, I encourage everybody to learn a little more about statistics and to enjoy understanding it.
Competing interests: medical doctor + researcher
Perhaps this points to a gap in medical education
I wonder whether medical schools should provide (and require) more sophisticated training in research design and analysis--perhaps as a substitute for currently required courses in organic chemistry. Which is more important for physicians who want to read journal articles with a critical eye and thereby enable themselves to practice evidence-based medicine?
Competing interests: I'm a Ph.D., not an M.D. Feel free to display my comments in a smaller font than everyone else's.