
SEARCHING FOR THE EVIDENCE IN BUDAPEST: YOUR PROBLEMS MAY HAVE ALREADY BEEN SOLVED

Contributed by Chris Florkowski, Canterbury Health Laboratories, Christchurch, NZ and John Whitfield, Royal Prince Alfred Hospital, Sydney

Editor's Note: This article previously appeared in The Clinical Biochemist Newsletter, No 160, December 2005, Editor: Sandra Klingberg; Publisher: Australasian Association of Clinical Biochemists

Budapest was the venue for the four-day Workshop on Evidence-Based Laboratory Medicine (EBLM), attended by 50 delegates from 27 countries around the world, including the authors, who are both corresponding members of the IFCC Committee on EBLM. The Principal Coordinator of the Workshop was Professor Rita Horvath (Szeged, Hungary), supported by a global panel of "niche" experts in the component skills of EBLM.


Background to EBLM:

Rob Christenson (Baltimore, USA) set the scene by discussing the meaning and context of EBLM. Having evolved through several definitions, EBLM may be regarded as Evidence-Based Medicine with a Laboratory focus, and is about integrating the best available evidence with personal experience and patient values. On the one hand, it is a sophisticated tool for addressing the big questions facing the Laboratory, namely the value of, and Evidence base for, Outcomes that justify whether or not there is a case for offering a particular test as a clinical service. In its more expedient ("quick and dirty") form, it is a way of addressing the everyday questions that Health Professionals are confronted with. Paul Glasziou (Centre for Evidence-Based Medicine, Oxford) related how the "Evidence Cart", a portable on-line computer, has been used on ward rounds in Oxford to address patient-focused problems. He also described how local Journal Clubs use EBM principles to address specific questions, and what a popular format this is. This approach is all the more pertinent given the explosion of new knowledge, with an estimated 1,500 new articles appearing per day on MEDLINE, which Health Professionals cannot reasonably be expected to keep up with. Paul, incidentally, will be known to many as the former Professor of Evidence-Based Practice at the University of Queensland. Intriguingly, Rita Horvath likened EBLM to the Chain Bridge of Budapest, metaphorically linking old information, tradition and experience (the old town of Buda) with new knowledge from systematic research (the new town of Pest).

Overview of EBLM Workshop:

The Workshop used a Lecture format, supplemented by small group sessions (8-10 people), supported by facilitators, working through specific examples to consolidate the theme of the day. The key skills of EBLM are firstly formulating the right question, then searching for and critically appraising the Evidence. Systematic reviews and meta-analyses require a higher level of skill to execute, though knowing how to find and access them provides a ready, valuable source of key information. Few systematic reviews address Laboratory themes compared with other clinical topics. Remarkably, few clinicians have ever accessed a systematic review and even fewer have utilised one. One of the key lessons of the Workshop was that one might be very surprised to find that a systematic review of a particular topic of interest has already been done. A lot of work may be saved and valuable information gleaned; it is a matter of knowing how and where to look. The next level is to apply the Evidence in a practical way, either focused on everyday issues or in formulating Guidelines. The ability to access and critically appraise Guidelines is another key skill of EBLM.

Day One: Formulating the Question:

The first day focused on setting the right question. In its most expedient form, the acronym "PICO" provides the guidance and sets the template for subsequently searching the literature. This is the discipline of defining the P = Patient group (eg patients with suspected hyperkalaemia), the I = Intervention or test (in this instance an abnormal ECG with peaked T waves), the C = Comparator (a normal ECG) and the O = Outcome (K+ >6.5 mmol/L). In other words, do peaked T waves on an ECG reliably identify patients with potassium levels above 6.5 mmol/L? Subsequent small group sessions coached delegates in formulating the appropriate question in other scenarios using the "PICO" format, and then in selecting appropriate study designs to address that question.
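By way of illustration only (this was not part of the Workshop material), the hyperkalaemia example can be written out as a small structured record; the Python names below are our own and are purely a sketch of how the four PICO elements also supply the key terms for a later literature search.

    from dataclasses import dataclass

    @dataclass
    class PicoQuestion:
        """A structured clinical question in PICO form (illustrative sketch only)."""
        patient: str       # P: the patient group or problem
        intervention: str  # I: the index test or intervention
        comparator: str    # C: the comparator or reference situation
        outcome: str       # O: the outcome of interest

    # The hyperkalaemia example from Day One, expressed in PICO form.
    question = PicoQuestion(
        patient="patients with suspected hyperkalaemia",
        intervention="ECG showing peaked T waves",
        comparator="normal ECG",
        outcome="serum potassium > 6.5 mmol/L",
    )

    # The same elements provide candidate search terms for Day Two.
    print(" AND ".join(f'"{t}"' for t in (question.patient, question.intervention, question.outcome)))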

Appropriate Study Design is a crucial element that was addressed by Patrick Bossuyt (Amsterdam, Netherlands). For a diagnostic test evaluation, the optimal study design is a prospective blinded comparison of the experimental test and the reference test ("gold standard") in a consecutive series of patients from a relevant, well-defined population. Patrick developed this further on day two, illustrating how failings in areas of study design may lead to an over-estimation of diagnostic accuracy. For example, Spectrum Bias can be introduced when a test is evaluated in a study population that is not representative. The classic example is the "diagnostic case-control study", where groups of patients with known disease are compared with individuals known to be free of the disease. Another example is partial verification bias, where not all patients with negative results on the experimental test are subjected to the reference test. Some patients are either excluded or counted as true negatives, which may lead to an over-estimation of sensitivity and specificity. One key reference that addresses these issues is Lijmer et al (JAMA 1999; 282: 1061-6). All these elements have been incorporated into the Standards for Reporting of Diagnostic Accuracy (STARD criteria), a checklist of items that should be included in the report of a study of diagnostic accuracy. These have been implemented by many mainstream Clinical Chemistry and other journals, with a favourable impact on the quality of submitted research. It should be noted, however, that what is often perceived as the Holy Grail of EBM, namely the randomised controlled trial (RCT), is not necessarily always the most appropriate study design. EBM is rather about fitting the most appropriate study design to the question in hand.
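A small worked example may help to show the direction of the partial verification bias Patrick described. The numbers below are invented purely for illustration: when only some test-negative patients are verified and the unverified remainder are counted as true negatives, sensitivity in particular appears considerably better than it really is, and specificity is also nudged upward.

    def sens_spec(tp, fp, fn, tn):
        """Return (sensitivity, specificity) from the counts of a 2 x 2 table."""
        return tp / (tp + fn), tn / (tn + fp)

    # Invented cohort of 1,000 patients: 200 diseased, index test with
    # true sensitivity 80% and true specificity 90%.
    tp, fn = 160, 40    # diseased: 160 test-positive, 40 test-negative
    fp, tn = 80, 720    # non-diseased: 80 test-positive, 720 test-negative
    print("True values:            sens=%.3f spec=%.3f" % sens_spec(tp, fp, fn, tn))

    # Partial verification: only half of the test-negative patients receive
    # the reference standard; the unverified remainder are assumed disease-free.
    verified_fraction = 0.5
    fn_biased = fn * verified_fraction               # missed cases halved
    tn_biased = tn + fn * (1 - verified_fraction)    # missed cases miscounted as true negatives
    print("With verification bias: sens=%.3f spec=%.3f" % sens_spec(tp, fp, fn_biased, tn_biased))

With these invented counts, apparent sensitivity rises from 0.800 to 0.889 and apparent specificity from 0.900 to 0.902.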

Day Two: Searching for the Evidence:

The first initiative test was a Metro ride across Budapest to the Health Insurance Fund Building. This day was spent predominantly in a Computer Laboratory, being coached by Paul Glasziou and others in the subtleties of searching the literature, in particular practical tips on using PubMed.

Daniel Pewsner (Bern, Switzerland) set the scene by giving practical advice on search strategies. The first piece of advice is to be comprehensive: "You only find gold when you wash a lot of sand". The price of comprehensiveness, however, is a large number of "hits" that may not necessarily be relevant. Having said that, professional systematic reviewers would not baulk at this approach, nor at the prospect of screening over 1,000 articles or abstracts for relevance. Danny stressed the importance of getting help from an experienced librarian, tracking references in key publications and contacting authors of retrieved articles. By corollary, the key "don'ts" are restricting the search to English-language articles, neglecting EMBASE, and searching without ongoing monitoring of the literature. The "PICO" format expounded on day one enables the key terms to be more clearly defined for searching. The most favoured portal is PubMed (www.pubmed.com), a service of the National Library of Medicine (NLM) in the USA, with access to over 15 million citations from the biomedical literature. Using MeSH (Medical Subject Headings) offers a hierarchy of subheadings that enables more focused searching. Different MeSH headings and subheadings can be combined with Boolean operators (such as AND and OR) for further refinement. Bearing in mind that there is no indexing term such as "diagnostic studies", the "Clinical Queries" filter, channelled through the box "diagnosis", is a useful adjunct. This so-called "Haynes" filter refines the search, though with some loss of sensitivity.
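For readers who wish to experiment beyond the PubMed web interface, the same Boolean logic can also be driven programmatically through the NCBI E-utilities service that sits behind PubMed. The sketch below is purely illustrative and was not part of the Workshop; the MeSH terms are our own choice, loosely based on the BNP example appraised later in the day, and the Python requests library is assumed to be available.

    import requests

    # Combine MeSH headings with Boolean operators, as described above.
    query = ('"Natriuretic Peptide, Brain"[MeSH] '
             'AND "Heart Failure"[MeSH] '
             'AND ("Sensitivity and Specificity"[MeSH] OR diagnosis[Subheading])')

    # esearch is the NCBI E-utilities endpoint behind a standard PubMed search.
    response = requests.get(
        "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi",
        params={"db": "pubmed", "term": query, "retmax": 20, "retmode": "json"},
        timeout=30,
    )
    result = response.json()["esearchresult"]
    print(result["count"], "citations found; first PMIDs:", result["idlist"])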

Once found, the literature needs to be critically appraised. The small group activity focused on literature review, with critical discussion of a paper on the utility of BNP in a primary care setting: Landray MJ et al (BMJ 2000; 320: 985-6). The critical appraisal tool STEP (Screening and Test Evaluation Program), developed at the University of Sydney, was used for the exercise. Although this is probably a better study than the publication reveals, it was clearly written before the STARD criteria became the norm and might now have had problems in being accepted for publication in its presented form.

Day Three: Data Collation and Analysis:

Jon Deeks (Oxford, UK) gave a practical overview of applied Bayes' theorem: sensitivity, specificity, ROC curves, likelihood ratios and odds ratios. The small group activity focused on working through calculations. Delegates picked up useful tips on the value of 2 x 2 tables, for example converting absolute values in the table to percentages in order to calculate likelihood ratios, and the distinction between probabilities and odds.
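For readers who did not attend, a minimal worked example of the kind of calculation practised in the small groups is given below. The 2 x 2 counts and the pre-test probability are invented for illustration; the arithmetic simply follows the standard definitions of sensitivity, specificity and likelihood ratios, with Bayes' theorem applied in its odds form.

    # Invented 2 x 2 table for an index test against a reference standard:
    #                  Disease +   Disease -
    #  Test positive       90          30
    #  Test negative       10         170
    tp, fp, fn, tn = 90, 30, 10, 170

    sensitivity = tp / (tp + fn)              # 0.90
    specificity = tn / (tn + fp)              # 0.85
    lr_pos = sensitivity / (1 - specificity)  # LR+ = 6.0
    lr_neg = (1 - sensitivity) / specificity  # LR- = about 0.12

    # Bayes in odds form: post-test odds = pre-test odds x likelihood ratio.
    def prob_to_odds(p):
        return p / (1 - p)

    def odds_to_prob(o):
        return o / (1 + o)

    pretest_probability = 0.20  # assumed prevalence in the population tested
    posttest_probability = odds_to_prob(prob_to_odds(pretest_probability) * lr_pos)

    print(f"sensitivity={sensitivity:.2f}  specificity={specificity:.2f}")
    print(f"LR+={lr_pos:.1f}  LR-={lr_neg:.2f}")
    print(f"pre-test p={pretest_probability:.2f} -> post-test p={posttest_probability:.2f}")

With these invented figures, a positive result raises the probability of disease from 20% to 60%.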

The topic of systematic reviewing and meta-analysis was introduced by Matthias Egger (Bern, Switzerland), working through the example of a patient whom Professor Egger saw as a House Surgeon in Burgdorf in 1981, and addressing the question of whether the patient, post-MI, should be treated with a beta blocker. Opinions, individual publications and traditional reviews gave mixed judgements on this question. Had a good systematic review been available, the case for treatment would have been established, but there was no access to one at the time in Burgdorf. The patient did not get a beta blocker. The value of systematic reviews was further put into clinical context by the small group session, which addressed a systematic review of a new rapid (rule-out) test for falciparum malaria, based on an immuno-chromatographic assay of the parasite antigen histidine-rich protein-2 (HRP-2). The review was written by a team including Danny Pewsner and was rated very highly by the delegates in critical appraisal. Apparently the positive findings in support of the new test surprised even the authors of the systematic review, who had approached it with some scepticism.

Day Four: Applying the Evidence:

Having gleaned the Evidence, the challenge is to apply it. In an interactive session, Sverre Sandberg (Bergen, Norway) highlighted that "the naïve assumption that when research information is made available, it is somehow accessed by practitioners, appraised then applied in practice is now largely discredited." Achieving a change in clinician behaviour is a particularly challenging area, especially in Laboratory Medicine. Paul Glasziou outlined seven steps on the path from Research to improved Health Outcomes: awareness, acceptance, applicability, availability and ability, acted on, agreed to and adhered to. Assuming an 80% transfer at each of these steps, this would result in only 21% patient use (0.8^7 = 0.21). Chris Price (Bayer, UK) addressed the relationship between testing and outcomes, highlighting the difficulty of designing studies to address hard outcomes, such as mortality, in the realm of Laboratory Medicine. More commonly, surrogate end-points such as faster decision making, improved compliance and patient satisfaction are reported in this domain. Wytze Oosterhuis (Tilburg, Netherlands) defined guidelines as "systematically developed evidence-based statements which assist providers, recipients and other stakeholders to make informed decisions about appropriate health interventions". He stressed that they are not substitutes for clinical experience, are not applicable to all patients, are not the only solution to a problem, and are not directives or compulsory, rigid protocols.

Integral to applying the evidence is the ability to critically appraise guidelines. The utility of the AGREE instrument (Appraisal of Guidelines for Research & Evaluation) (www.agreecollaboration.org) in this regard was demonstrated by Professor Horvath (Szeged, Hungary). An example of diabetes guidelines formed the template for small group discussion. Although the guidelines appeared superficially adequate, detailed critical appraisal using the AGREE instrument highlighted many shortcomings that were not initially apparent to many delegates.

Conclusion:

This was a valuable Workshop that raised awareness of key skills in Evidence-Based (Laboratory) Medicine. The intention is for delegates to promulgate this teaching in cascade fashion in their own regions ("train the trainers"), which we have started to do by writing this article. We anticipate that more activities focused on EBLM will be promoted through the AACB. The authors acknowledge that EBLM does not have all the answers, but at least it provides a structured tool for setting the question in the first place. Most importantly, the teaching of EBLM needs to be integral to core topics in Laboratory Medicine rather than a stand-alone discipline in its own right.

Some useful EBLM Links:

www.cochrane.org
The Cochrane Library has access to several databases.

www.acpjc.org
The ACP Journal Club cites and critically appraises well-performed and valid studies and also has a focus on diagnostic studies.

www.ifcc.org
Follow the links to the c-EBLM database of systematic reviews in Laboratory Medicine. (About 650 references, with abstracts)

www.pubmed.com
PubMed is free and allows use of validated search filters for selecting papers on diagnosis and other topics.

www.tripdatabase.com
The database provides evidence-based summaries with comments, also on diagnostic issues.

www.cebm.net
The Oxford Centre for Evidence-Based Medicine provides access to a wide range of EBM resources, including PowerPoint presentations and critical appraisal tools.

www.health.usyd.edu.au/step/about/appraisal.htm
The Screening and Test Evaluation Program provides a critical appraisal tool.

www.ebandolier.com
A monthly newsletter of evidence distributed in the NHS.

www.agreecollaboration.org
Appraisal of Guidelines for Research and Evaluation provides an instrument for critical appraisal of guidelines and for assisting guideline development.


Paul Glasziou coaches delegates through searching for the evidence in the Computer Laboratory of the Health Insurance Fund Building.



Attendees: Chris Florkowski, Matthias Egger, Rob Christenson, Peter Bunting, Rita Horvath and John Whitfield.
