Medical science evolves quickly, so the training you receive in medical school will be outdated – sometimes in crucial areas – by the time you are in practice. So, rather than basing medical practice on what you were taught in medical school (which will soon be many years in the past), the idea is to base it on current medical research and knowledge. A podcast on 'Bad Medicine' outlines the reasons why evidence-based medicine has become so central: "We know that we know far less than we don't know."
Evidence-Based Medicine (EBM): "The conscientious, explicit, and judicious use of current best evidence in making decisions about the care of individual patients." –David Sackett.
EBM involves integrating the best available evidence with your clinical expertise, while also bearing in mind the patient's beliefs and values:
As a clinician you will be expected to be aware of current research findings in your area of specialization. Patients may also present you with research they have found—which may vary from peer-reviewed journals to something they found on an Internet chat forum. You can neither arbitrarily accept nor dismiss all research done since you left medical school. Instead, you must learn to critically review journal articles and understand what the new results are saying.
Using Medical Journals in EBM
Medical journals are important because:
- Findings are published in them first;
- The studies have been peer reviewed by experts in the field;
- The level of detail in reports is generally sufficient to judge their validity and applicability to your practice;
- Journal articles are available electronically.

Clinical journals (the Lancet, the New England Journal of Medicine, the British Medical Journal, etc.) publish articles relevant to treating patients, so they will be your primary source of information, along with specialist journals in your field of practice. However, new articles may conflict with established practice, so you have to be cautious about altering your clinical practice to follow what may later turn out to be an incorrect conclusion.
All of this underlies the importance of learning to be an informed consumer of medical research.
On the Limits of EBM
The EBM movement has become widespread and influential, but is not without its detractors. You should be aware of some of the criticisms of EBM:
- Biological variability among people means that there is no guarantee that the average results obtained in a large study will apply to the patient in front of you.
- Therefore, you can never rely rigidly on the evidence base in treating a patient; this is not a mechanical process. Undue reliance on published evidence may distract you from the evidence the patient is supplying: consider what they are telling you and their uniqueness. There is always a need for judgment in how completely the "evidence" fits this particular case.
- However, you are caught between your clinical judgment and the threat of lawsuits if things go wrong. You are better protected if you have at least considered applying the current standard of care in this instance.
- RCTs are usually undertaken in controlled settings; when the results are applied to complex patients with multiple presenting problems, the applicability of the evidence may be less clear. The intervention tested in an RCT comprises both the treatment and a series of additional procedures, ranging from preliminary selection criteria and measurement protocols to the signing of consent forms, none of which is included in the eventual therapeutic application.
- Although EBM is based on belief in the RCT, there is no RCT to prove the contribution of EBM itself: it is an article of faith.
How to Formulate a Search Question
A great way to waste time when searching for relevant information is to be unclear about your clinical question; make it as precise as possible.
The PICO format is recommended. Your clinical question should specify:
- Population (e.g. post-menopausal women);
- Intervention (e.g. estrogen replacement therapy);
- Comparison (e.g. compared to no estrogen replacement);
- Outcome (e.g. the risks for breast cancer and osteoporosis).
Hence, a PICO-formatted question might ask: “Is there an elevated risk of osteoporosis and/or breast cancer among postmenopausal women who are taking estrogen supplementation, compared to women not on estrogen therapy?”
This is not to say one should type this full question into a database like PubMed; the question has to be translated using Boolean search terms and key words. Nonetheless, you should begin with a precise question and ensure, by the end of your research, that all of the PICO elements have been answered.
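As a sketch of that translation step, the snippet below builds a Boolean query string from PICO elements. The synonym lists and query structure are illustrative only; a real PubMed search would also use MeSH headings and database-specific field tags.

```python
# Sketch: turning PICO elements into a Boolean search string.
# The terms below are illustrative, not a validated search strategy.
pico = {
    "population": ["postmenopausal women", "menopause"],
    "intervention": ["estrogen replacement therapy", "hormone replacement"],
    "outcome": ["breast cancer", "osteoporosis"],
}

def build_query(pico):
    """Join synonyms within each PICO block with OR, then combine blocks with AND."""
    blocks = []
    for terms in pico.values():
        blocks.append("(" + " OR ".join(f'"{t}"' for t in terms) + ")")
    return " AND ".join(blocks)

print(build_query(pico))
# e.g. ("postmenopausal women" OR "menopause") AND (...) AND (...)
```

Note that the comparison element (no estrogen replacement) is usually captured by the study designs retrieved rather than by an explicit search term, which is why it is omitted from the query here.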
Critical Appraisal of Literature & the Hierarchy of Evidence
Critical appraisal is a systematic process by which clinicians judge the validity and usefulness of information that may guide them in addressing a clinical problem. Typically, critical appraisal refers to the literature, but you can also critically appraise other sources of information (e.g. your professors).
Critical appraisal begins with the four “A”s. As a clinician, you will have to acquire the relevant information; journal articles will have to be available to you (through the Internet or via a hospital or university library). You will then have to assess or judge the article(s). Not all are equally good—hence the critical appraisal theme. Next, you may need to adapt the findings to this particular patient; perhaps certain sub-analyses in the article may be more relevant than the main finding. Finally, you will apply the findings.
Hierarchy of Evidence
Critical appraisal implies that some sources of evidence are more valid than others (here, 'valid' refers to evidence that is free from bias).
There is general agreement that a well-designed empirical study should carry more weight than personal opinion in choosing a therapeutic approach. But empirical studies were not all born equal. Here is a hierarchy, in descending order, of study designs in terms of their potential to yield high-quality evidence. 'Potential' is included because a good study design can still be poorly executed.
- Meta-analyses of clinical trials (e.g. Cochrane reviews; Cochrane was previously known as the Cochrane Collaboration)
- High-quality systematic reviews or expert consensus statements (e.g. UpToDate)
- Other clinical trials (quasi-experimental designs)
- Prospective cohort studies
- Historical cohort studies
- Non-systematic reviews
- Case studies
If you would like a refresher on study designs and their strengths and weaknesses, click here to go to the Study Designs page.
Systematic reviews: Results from randomized clinical trials (RCTs) are the cornerstone for guiding clinical practice. But one trial is rarely sufficient, so guidelines for best practice are ideally based on systematic reviews of the results of several trials. Systematic reviews undertake a thorough search of the entire literature on a topic and then provide a narrative summary of the results, taking account of the quality of each study. Sometimes it is possible to combine the results from several studies numerically in a meta-analysis, forming a pooled estimate from all of the studies, for example to show the effectiveness of a drug for a particular purpose. Cochrane has been set up as an international effort to assemble the results of therapeutic trials, and it regularly publishes reviews of best practice in virtually all areas of medicine. Cochrane contains hundreds of reviews, and each review can cover hundreds of individual articles.
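To make the pooling step concrete, here is a minimal fixed-effect, inverse-variance meta-analysis sketch. The log odds ratios and standard errors are invented for illustration; real meta-analyses would also assess heterogeneity and consider random-effects models.

```python
import math

# Hypothetical results from three trials of a treatment:
# (log odds ratio, standard error). Numbers are invented for illustration.
studies = [
    (-0.40, 0.20),  # trial 1
    (-0.25, 0.15),  # trial 2
    (-0.55, 0.30),  # trial 3
]

# Fixed-effect inverse-variance pooling: each study is weighted by
# 1 / SE^2, so larger (more precise) trials count for more.
weights = [1 / se**2 for _, se in studies]
pooled_log_or = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"Pooled OR: {math.exp(pooled_log_or):.2f}")
print(f"95% CI: {math.exp(pooled_log_or - 1.96 * pooled_se):.2f}"
      f" to {math.exp(pooled_log_or + 1.96 * pooled_se):.2f}")
```

Because the weights are inverse variances, the pooled estimate sits between the individual trial estimates but closest to the most precise trial, and its confidence interval is narrower than any single trial's.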
Critical Appraisal Biases
Several biases may develop in the process of critical appraisal; be aware and avoid them!
Search-satisfying error: Occurs when a doctor stops looking for an answer to a patient's problem as soon as he/she discovers a finding that seems satisfactory, even though it may be incorrect.
Availability error: Occurs when a doctor considers only the information that is uppermost in his/her mind.
Confirmation bias: Occurs when a doctor considers only the part of the information available to him/her that confirms his/her initial judgment of what is wrong.
Diagnostic momentum: Occurs when a doctor is unable to change his/her mind about a diagnosis, even though uncertainty should remain.
Commission bias: Occurs when a doctor prefers to do something rather than nothing, irrespective of clinical clues suggesting that he/she should sit on his/her hands.
Attribution error: Seeing the patient from only one angle, sometimes a negative one.
Treadwell, J.R., et al. (2006). A system for rating the stability and strength of medical evidence. BMC Medical Research Methodology, 6:52.
Humour Corner: some evidence-based advice
"You can lead a man to the data, but you can't make him think."
Tucker's comment: It makes sense when you don't think about it.
A formula for the usefulness of information by Slawson & Shaughnessy
(Br Med J 1997;314:947-949):
Usefulness = Relevance of the information x Validity of the information / Work needed to access it.
For example, traditional journal articles may be valid, but they are rarely directly relevant to the practitioner and can be hard to read, so they are not very useful. By contrast, talking to an experienced colleague often gives you relevant information and is easy to do, though the validity of the advice may be questionable; overall, the usefulness is higher. Textbooks score moderately on relevance but rarely answer direct questions; they are fairly easy to access, but their validity may be reduced if they are out of date. (Reference: Richard Smith, Br Med J 2002;325:983)
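The comparison above can be sketched numerically. The 0–10 scores below are invented for the example (they do not come from Slawson & Shaughnessy's paper), but they show how the formula trades off relevance and validity against the work of access.

```python
# Toy illustration of: Usefulness = Relevance x Validity / Work.
# The 0-10 scores are invented for the example, not from the original paper.
sources = {
    "journal article":       {"relevance": 3, "validity": 9, "work": 8},
    "experienced colleague": {"relevance": 8, "validity": 5, "work": 1},
    "textbook":              {"relevance": 5, "validity": 6, "work": 3},
}

def usefulness(s):
    """Relevance times validity, divided by the work needed to access it."""
    return s["relevance"] * s["validity"] / s["work"]

# Print the sources from most to least useful under these toy scores.
for name, s in sorted(sources.items(), key=lambda kv: -usefulness(kv[1])):
    print(f"{name}: usefulness = {usefulness(s):.1f}")
```

Under these assumed scores, the colleague comes out most useful and the journal article least, matching the pattern described above: easy access can outweigh moderate validity.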
Updated November 21, 2018