Fact versus Conjecture: Exploring Levels of Evidence in the Context of Patient Safety and Care Quality

Written By

Maryam Saeed, Mamta Swaroop, Daniel Ackerman, Diana Tarone, Jaclyn Rowbotham and Stanislaw P. Stawicki

Submitted: 06 February 2018 Reviewed: 26 March 2018 Published: 05 September 2018

DOI: 10.5772/intechopen.76778

From the Edited Volume

Vignettes in Patient Safety - Volume 3

Edited by Stanislaw P. Stawicki and Michael S. Firstenberg

Abstract

Evidence-based medicine (EBM) can be defined as the integration of optimized clinical judgment, patient values, and the best available evidence. It is a philosophical approach to making the best possible clinical decisions for individual patients. Based on objective evaluation and categorization of methodological design and data quality, all existing literature can be organized according to a hierarchy of “evidence quality” that helps determine the applicability and value of scientific findings in terms of clinical implementation and the potential to change existing patterns of practice. In terms of general categorization of scientific impact, systematic reviews and meta-analyses of randomized controlled trials (RCTs), together with individual high-quality RCTs, are placed at the top of the hierarchy, followed by quasi-randomized designs, observational studies (including retrospective case series), and finally case reports and expert opinion. Each study design is susceptible to certain limitations and biases, highlighting the importance of both the clinical and the scientific acumen of the interpreting provider. Such an approach is critical to determining the value and the applicability of study recommendations in everyday practice. Evidence-based practice (EBP) has become one of the fundamental components of modern medicine and plays an indispensable role in the development (and improvement) of patient care and safety worldwide. Furthermore, organizations that create guidelines and policies for the management of specific conditions often base the content and strength of their recommendations on the quality of evidence available to expert decision-makers. Therefore, understanding the “state of the science” upon which those recommendations are based will help guide the medical practitioner on “if, when, and how” to apply evidence-based guidelines in his or her everyday medical or surgical practice. This chapter focuses on the clinically relevant application of levels of scientific evidence (LSE) and the corresponding levels of clinical recommendation (LCR) in the context of care quality and safety.

Keywords

  • evidence-based medicine
  • levels of evidence
  • levels of recommendation
  • meta-analysis
  • randomized controlled trial
  • case-control study
  • cohort study
  • case reports
  • expert opinion
  • medical decision making

1. Introduction

Evidence-based medicine (EBM) is a scientific approach to clinical problems, intended to help clinicians make the best possible decision for their patients, with the “best decision” defined as one that incorporates the relevant evidence, applied through the expertise of a practitioner, while preserving patient autonomy and safety [1, 2]. At its core, EBM combines two fundamental principles. First, evidence by itself is never sufficient to make a clinical decision; it must be combined with clinical expertise and adapted to each patient’s unique case. Second, practitioners need to be aware of how much confidence can be placed in a particular recommendation, thus creating the need for pre-determined levels of scientific evidence (LSE) to help guide the decision-making process [2].

During the past two decades, the introduction of EBM has contributed to a dramatic shift in clinical practice patterns [3, 4, 5]. Wide-scale implementations of EBM principles across institutions formed a foundation for better and more streamlined decision making among physicians, contributing to gradual improvement in both patient safety and quality of care [6, 7]. Perhaps just as importantly, such paradigms led to an increased ability for individuals and systems alike to undergo self-evaluation and self-improvement [8, 9]. As the overall quantity and quality of available clinical scientific evidence increased over time, applications of this knowledge led to enhancements in various clinical processes, directly and indirectly improving the safety record of healthcare institutions that embraced EBM-based models [10, 11].

Any experimental observation suggesting a relationship between two clinical variables constitutes some form of scientific evidence. The “strength” of that evidence is determined by the total number of measurements, the degree of any observed correlation, the reproducibility of results, and the methodology used to collect and analyze information [12, 13, 14]. It is important to note that the availability of multiple sources of information in a specific area may allow for cross-correlation of results and greater decisional confidence when making recommendations. It then behooves clinicians to understand both the strength of recommendations, which is inherently variable due to heterogeneous methodological approaches, and the applicability of results to a particular patient, which is derived through a deeper understanding of how the evidence was obtained [2, 15]. Based on the quality of study design, estimated level of bias, overall validity, and clinical applicability, standardized definitions of “levels of evidence” were introduced to help reduce errors and to support better, more consistent clinical decisions [4, 16]. Tables 1 and 2 present commonly utilized levels of scientific evidence and grades of recommendation, respectively [17]. Grades of recommendation (GOR) are discussed in more detail in subsequent sections of this chapter.

| LSE | Type of supporting scientific data |
|-----|------------------------------------|
| Ia | High-level evidence derived from meta-analyses of RCTs |
| Ib | Scientific evidence obtained from at least one high-quality RCT |
| IIa | Evidence obtained from at least one well-designed, non-randomized CT |
| IIb | Evidence obtained from at least one well-designed, quasi-experimental study |
| III | Evidence based on well-designed observational (e.g., case-control, correlation, or comparative) studies |
| IV | Evidence based on documented opinions of experts, committees of experts, and/or clinical experiences of opinion leaders in a specific topic area |

Table 1.

Broadly accepted classification of levels of scientific evidence.

CT, clinical trial; LSE, level of scientific evidence; and RCT, randomized controlled trial.

| Grade | LSE | Corresponding recommendation |
|-------|-----|------------------------------|
| A | Ia, Ib | Grade A recommendations require at least one RCT as part of the overall scientific evidence. In addition, good overall data quality and consistency of results must be present |
| B | IIa, IIb, and III | Grade B recommendations require methodologically sound CTs (that are not RCTs) as part of the overall scientific evidence used during the formulation process. Grade B recommendations are based on the most heterogeneous grouping of evidence (IIa, IIb, and III) |
| C | IV | Grade C recommendations are usually built upon a careful compilation of expert opinions and/or clinical experiences of respected opinion leaders in a specific topic area. In global terms, Grade C recommendations indicate the absence of high-quality clinical studies (e.g., suitable CTs or RCTs) |

Table 2.

Grades of recommendation, from highest (A) to lowest (C), are primarily based on the level of available scientific evidence.

CT, clinical trial; LSE, level of scientific evidence; and RCT, randomized controlled trial.

Currently, the best available evidence in any particular clinical area is heavily dependent on the issue being researched, the difficulty of obtaining adequate data (which may be based on the prevalence, incidence, or even our understanding [or lack thereof] of a particular disease), and the type of scientific question being asked (e.g., clinical prognosis, treatment effectiveness, and risk-benefit assessment) [4, 18, 19, 20]. However, when it comes to issues of therapy or treatment, randomized controlled trials (RCTs) and systematic reviews of RCTs are generally considered to be the “gold standard” with the highest internal validity and least amount of bias [14]. On the opposite end of the spectrum, non-systematic observations, ideas, and editorial opinions made by individual clinicians are considered to be the weakest form of supporting evidence in the context of formulating subsequent recommendations [3, 21]. The hierarchy of LSE, broken down by the type of research endeavor, is presented in Figure 1 [9, 22].

Figure 1.

Levels of scientific evidence according to different types of study. For each category of research (e.g., experimental, qualitative, outcome, or descriptive), the red arrow indicates the increasing level of scientific evidence, manifested through greater internal, external, and quantitative result validity. Modified from Tomlin and Borgetto [22].

The practice of EBM provides clinicians with a clear, concise course of action, encouraging the formulation of a relevant clinical question, finding and critically assessing the best available evidence, and applying pertinent results in clinical practice with the fundamental goal of improving patient outcomes, safety, and overall quality of care [23, 24, 25]. As outlined in Table 1 and Figure 1, available evidence may range from an RCT to isolated observations or opinions of a single individual. While not all existing evidence is considered equal, it is critical to understand that all LSEs are important and have their own intrinsic value that corresponds to their level of clinical relevance and overall impact on patient care [26].

In this chapter, we outline different LSEs and associated study designs, followed by a detailed discussion on implementing clinical research findings in the context of GOR. Finally, we consider adaptation of evidence-based practice to improve both quality of care and patient safety across our health systems.

2. Levels of evidence: the importance of study design

Therapeutically relevant clinical research evidence can be broadly categorized into studies of an observational nature and those that have a structured experimental study design [4, 27]. Experimental studies, which include randomized controlled trials (RCTs) and methodologically sound meta-analyses of RCTs, are positioned at the top of the hierarchy (Figure 1 and Table 1) [3]. Although nomenclature may change across different categories of research (e.g., experimental, qualitative, outcome, or descriptive), the fundamental premise of LSE stratification remains the same—an organized progression from “low to high” along the spectrum of internal/external scientific validity (and repeatability) [28, 29, 30, 31, 32].

Bias in a study design can confound the results of an investigation and misrepresent the true implications of the intervention or treatment being studied [33]. An RCT is a clinical trial design intended to minimize bias by randomly allocating study participants to two or more interventions or treatment “arms” [14, 34], often “blinding” patients and investigators from knowing which intervention an individual is receiving. Within this paradigm, each treatment arm may represent a different drug, device, or procedure; it may also represent different ways of applying or using a process, device, or procedure, or a placebo. By removing any opportunity for patients, clinicians, or investigators to choose which arm of the trial the participants will be assigned to, RCTs effectively minimize bias by balancing both known and unknown prognostic variables across arms [4, 18, 35, 36]. Together with the above-mentioned “blinding” process, randomization thus allows a “less biased” estimate of the treatment effect, which has enabled RCTs to revolutionize medical research, achieving the status of “gold standard” for therapeutic research and holding the top position in the EBM hierarchy of LSEs (Table 1 and Figure 1) [37, 38].
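
To make the allocation mechanism concrete, the sketch below illustrates permuted-block randomization, one common way of assigning participants to two arms while keeping group sizes balanced. This is our minimal illustration, not a method prescribed by the chapter; the arm labels and block size of 4 are illustrative assumptions.

```python
import random

def block_randomize(n_participants, arms=("treatment", "control"), block_size=4):
    """Permuted-block randomization (illustrative sketch): within each
    block, every arm appears equally often in random order, so group
    sizes never diverge by more than half a block."""
    assert block_size % len(arms) == 0, "block size must be a multiple of the number of arms"
    allocation = []
    while len(allocation) < n_participants:
        block = list(arms) * (block_size // len(arms))  # e.g., [T, C, T, C]
        random.shuffle(block)                           # random order within the block
        allocation.extend(block)
    return allocation[:n_participants]

# Example: allocate 10 participants; the sequence is generated centrally,
# so neither patients nor investigators can predict upcoming assignments.
print(block_randomize(10))
```

In practice, such sequences are generated and concealed by a central coordinating body, which is what preserves allocation concealment and supports blinding.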

Results from RCTs, although considered the most robust and reliable form of evidence, are not always easily translatable or applicable across diverse clinical settings. Moreover, not every medical decision requires data from an RCT [39]. Implementation of RCT findings may be challenging at a single-institution level, primarily because of procedural, work-flow, and other institution-specific factors [2, 40].

Well-designed observational studies are recognized as level IIa, IIb, or III evidence (Table 1) and are generally easier to conduct than RCTs, while still providing meaningful clinical evidence [37, 41]. Additionally, observational studies may lay the foundation for a definitive RCT to be conducted. Cohort and case-control studies are the two primary types of observational studies that can demonstrate important associations between exposure and disease [37]. Placed slightly above case-control studies on the LSE hierarchy, cohort studies can be either prospective or retrospective in nature [37, 42]. Prospective cohort studies observe two groups: one with the risk or prognostic factor of interest and one without [9]. These groups are followed over a variable period of time to observe the development of a disease or a specific outcome among those with and without the risk factor. Prospective cohort studies can be tailored to collect data regarding exposure relevant to any specific or rare disease and can be designed to observe multiple outcomes for any given exposure or intervention [37, 43]. Retrospective cohort studies, on the other hand, are historic in nature and look into the past to analyze disease development within a specific group of subjects based on their known (or declared) exposure status. Retrospective cohort studies are more economical to conduct than prospective studies and take less time to complete, although the results from such studies may be incomplete or inaccurate [37, 44, 45]. They may also have advantages in terms of utilizing large national data sets to help analyze and derive relationships that may answer, or pose, new clinical questions.

In contrast to cohort studies, case-control studies recruit subjects based on the outcome of interest at the outset of the study [46, 47]. Subjects with a specific outcome are categorized as “cases” and subjects without the specific outcome are categorized as “controls” [47]. Retrospective data regarding exposure to single or multiple risk factors are then collected from both groups, typically by conducting interviews or surveys, or by collecting chart data. Based on the collected data, the strength of association between disease and exposure may be determined and reported as an odds ratio, which approximates the relative risk when the disease is rare [4]. Case-control studies can provide valuable information about rare diseases or ailments that have a prolonged latency period [4, 37, 44, 45].
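
As a concrete, worked illustration (the counts below are hypothetical, not data from any study cited in this chapter), the odds ratio from a 2×2 case-control table is the cross-product (a·d)/(b·c), with a confidence interval conventionally computed on the log scale:

```python
import math

def odds_ratio(a, b, c, d):
    """Odds ratio from a 2x2 case-control table:
         a = exposed cases,    b = unexposed cases,
         c = exposed controls, d = unexposed controls.
    Returns the OR and a 95% CI via the log-OR normal approximation."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lower = math.exp(math.log(or_) - 1.96 * se_log_or)
    upper = math.exp(math.log(or_) + 1.96 * se_log_or)
    return or_, (lower, upper)

# Hypothetical counts: 40 of 100 cases exposed vs. 20 of 100 controls.
or_, (lo, hi) = odds_ratio(40, 60, 20, 80)
print(f"OR = {or_:.2f}, 95% CI = ({lo:.2f}, {hi:.2f})")  # OR = 2.67, CI ~ (1.42, 5.02)
```

Because the confidence interval excludes 1.0, this hypothetical data set would suggest a statistically significant association between exposure and outcome.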

Case series, case reports, and expert opinion constitute the lowest quality evidence on the overall hierarchy of LSE, are inherently retrospective in nature, and most often feature no control or comparison groups (or cases) [48]. These reports are usually narrow in scope, describe a single population subgroup, and are often based on the experiences of an individual researcher or a single institution. The above-mentioned factors render data within the latter LSEs less reliable, possibly difficult to reproduce, and often non-generalizable when applied to a larger (or different) population. Such studies, however, can provide useful information on rare diseases or unique presentations and complications associated with particular interventions or procedures [4, 49, 50, 51].

The practice of EBM requires deep and critical analysis of the entire body of available evidence in a specific area, with more fragmentary assessments being considered improper and inadequate [15, 52]. Systematic reviews are a key component of evidence-based health care, and are defined loosely as “secondary analyses” of a large collection of reported results from individual studies for the purpose of integrating the overall findings [53, 54]. Systematic reviews essentially use data from individual studies (most often RCTs) and “pool” these data together to draw a more robust conclusion regarding the effect of the intervention being researched on specific clinical outcome(s) [4, 19, 55]. The primary aim of systematic reviews is to determine whether an effect exists and if that effect is negative or positive in relation to a specific clinical approach or intervention vis-à-vis a pre-defined outcome [54]. By “pooling” data and results from multiple studies, well-designed systematic reviews can answer questions that cannot be sufficiently answered by any individual study [56]. In addition, this approach clearly demonstrates any discrepancies between apparently conflicting studies. Finally, systematic reviews can also be used to generate new hypotheses [54, 57].
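
While the statistical machinery of meta-analysis is beyond the scope of this chapter, the basic idea of “pooling” can be sketched briefly. The following is a minimal illustration of our own (using hypothetical study values, not the method of any review cited here) of fixed-effect inverse-variance pooling, in which each study’s log odds ratio is weighted by the inverse of its variance, so that larger, more precise studies contribute more to the pooled estimate:

```python
import math

def pool_fixed_effect(log_effects, variances):
    """Fixed-effect inverse-variance meta-analysis: weight each
    study's log effect by 1/variance; the pooled standard error
    is the square root of the reciprocal of the summed weights."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, log_effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return pooled, se

# Hypothetical log odds ratios and variances from three small studies.
log_ors = [math.log(1.8), math.log(2.4), math.log(1.2)]
variances = [0.10, 0.25, 0.15]
pooled, se = pool_fixed_effect(log_ors, variances)
print(f"Pooled OR = {math.exp(pooled):.2f}, 95% CI = "
      f"({math.exp(pooled - 1.96 * se):.2f}, {math.exp(pooled + 1.96 * se):.2f})")
```

Real systematic reviews add substantial safeguards on top of this arithmetic (study selection criteria, quality appraisal, heterogeneity assessment, and random-effects models where studies differ); the sketch above illustrates only the core pooling step.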

Having described the different levels of evidence, it is important to note that the LSE hierarchy is not “set in stone,” and a number of factors determine the validity and strength of any particular research study and, consequently, of the evidence. Key elements of study methodology, such as patient inclusion or exclusion criteria, play a critical role not only in determining the level of evidence attributable to any particular finding but also the applicability and translatability of study results to any particular patient or institutional setting. The recognition of inherent biases related to the study setting, financing source(s), and the appropriateness of the statistical analysis plan is important when determining the validity of results. Subsequent sections of this chapter provide a discussion of the practical application of LSEs in the clinical arena, focusing specifically on patient safety and quality of care, as well as the role of different grades of recommendation (GORs) in understanding the implementation of evidence in a particular setting or situation.

3. Levels of scientific evidence: clinical applications and examples

In order to better understand how LSEs are relevant to GORs and EBM, some practical clinical examples are provided below to help clarify these important scientific relationships and associations. Further discussion of GORs and implementation paradigms for clinical scientific evidence (e.g., P-D-C-A and the 5A’s, Figures 2 and 3, respectively) will then follow, with a focus on fostering organizational excellence and a culture of safety [58, 59, 60].

Figure 2.

Schematic representation of the PDCA (Plan-Do-Check-Act) cycle. Each iteration of the cycle involves a number of procedural checkpoints, with specific sets of associated tasks and critical questions.

Figure 3.

The evidence-based medicine cycle begins with Assessment (e.g., determination of need for a new cycle/process). This is followed by Asking pertinent questions (e.g., a reasonably answerable and searchable issue) and Acquisition of data (e.g., existing literature and targeted de novo gathering of information). The next step is Appraisal (e.g., critical evaluation of all available data in the context of the primary question and the quality/levels of evidence), and finally, Application of the newly synthesized evidence into the existing institutional/patient care matrix. Based on the overall outcome of the completed cycle, as well as institutional needs and areas of focus, the determination of “if/when” to begin the next cycle is made [143, 148].

Our discussion begins with a relatively recent account of clinical investigations into a hypothesized association between silicone breast implants and lymphoma [18, 61, 62, 63, 64]. Given the growing number of anecdotal case reports describing lymphoma following silicone breast implantation, several retrospective cohort studies with large numbers of subjects were conducted, including many years of follow-up data [18, 65, 66, 67]. An association was reported in some studies, but no statistically significant conclusion could be drawn, suggesting that a greater LSE would be required to demonstrate any linkage between silicone breast implants and lymphoma. When a high-quality systematic review was performed by combining data from all retrospective cohorts, no significant association was shown between silicone breast implants and the development of lymphoma [63]. This particular story highlights the importance of LSEs and the potential for patient harm (economic, physical, and psychological) when available data are insufficient to make specific clinical management recommendation(s) [68, 69]. At the same time, one might also argue that further research is required to increase the certainty of the relationship between the variables under scrutiny, but this approach may not be feasible for very rare conditions or occurrences due to various ethical, patient safety, and statistical considerations [18].

Another example where ethical, financial, and patient safety considerations preclude the conduct of any prospective, randomized research is the area of retained surgical items (RSI) [56, 70]. The retention of surgical instruments is an extremely rare complication, and thus, any study of methods to prevent this dreaded occurrence would need to be prohibitively large to have the power to show a statistically significant advantage of any particular approach over another. At the same time, justification for prospectively comparing specific interventions or the differential application of protocols/procedures related to RSI risk is ethically questionable at best. Consequently, a meta-analytic study of all existing case-control reports on the topic of RSI was performed, effectively demonstrating that pooled data from three source studies identified potential risk factors for RSI that were not apparent from each individual study [56]. While source reports individually suggested that between 3 and 6 variables may be associated with greater incidence of RSI [70, 71, 72], the combined report showed that 7 of 11 potential risk factors were significantly associated with elevated odds for RSI [56]. The above exercise in knowledge synthesis shows that carefully implemented meta-analytic approaches can result in better understanding of an important area of patient safety.

Moving to a different patient safety topic, case-based experiences from the 1950s led physicians to avoid epinephrine injections during hand/finger procedures due to concerns about ischemic complications [18, 73]. Despite the absence of higher-level evidence, avoidance of digital epinephrine injections was widely practiced and taught during that time. Eventually, a comprehensive review of the literature between the years 1880 and 2000 was performed, identifying 48 cases of digital infarction, 21 of which involved epinephrine injections [73]. Subsequently, a number of cohort studies were published, reporting no significant association between digital ischemia and local epinephrine injections [74, 75, 76]. Based on the conclusions drawn from studies with higher LSE, the original hypothesis was rejected [18]. This example demonstrates how observational and case studies may be inherently biased, and why higher levels of scientific evidence must be available before making any definitive conclusions, accepting evidence as fact, and implementing evidence-based recommendations [18].

In contrast, even well-conducted RCTs are sometimes unsuccessful in swaying medical practice. The University Group Diabetes Program trial, a methodologically sound RCT conducted in the late 1960s, found a lack of efficacy of the anti-diabetic drug tolbutamide, compared with diet alone, in prolonging life. Furthermore, the study suggested that tolbutamide was less effective than diet alone or diet with insulin as a modulator of cardiovascular mortality [77, 78]. Despite the relatively high LSE presented in the study, tolbutamide prescriptions increased, as debate over the trial’s interpretation continued for more than a decade [78, 79, 80]. Similarly, the Antihypertensive and Lipid-Lowering Treatment to Prevent Heart Attack Trial (ALLHAT) showed that thiazide diuretics were as effective as modern (and much more expensive) calcium-channel blockers and angiotensin-converting-enzyme inhibitors in treating hypertension [81]. These findings were questioned by pharmaceutical companies, and after an initial resurgence of thiazide prescriptions following the trial’s publication [82], the sales of newer antihypertensive agents increased [38, 83, 84, 85].

All of the above examples show that no single study can provide definitive answers or understanding of therapeutic response, diagnostic test efficacy, or disease-specific risk factors. The struggle continues between the forces of clinical habit, third-party interests, and objective evidence. Policy-makers, opinion leaders, and providers must embrace both open-mindedness and the value of unbiased research in guiding EBM and evidence-based recommendations [86, 87, 88]. Likewise, all healthcare providers must be well versed in both the definitions and the application of the concepts of LSE, GOR, and EBM and must recognize that there are multiple factors at play when deciding which evidence is best and how to apply this evidence [87, 88, 89]. It has been proposed that misapplication of clinical scientific evidence may be one of the key barriers to sustainable improvement in healthcare quality and safety in a highly complex system with increasingly constrained resources [87, 89, 90].

4. Important limitations

Recommendations from various expert groups are based on different LSEs, ranging from randomized controlled trials to so-called expert opinions, and all come with their own set of limitations that should be considered when translating research findings into clinical practice. Having defined and discussed important aspects of the different LSEs, we will now touch upon some of the pitfalls associated with implementing and following EBM in everyday practice.

Introduced as an effort to reduce bias and improve the accuracy of evidence, RCTs have expanded medical knowledge and transformed clinical practice [3]. While RCTs are considered to provide the most internally valid evidence, not all RCTs are methodologically sound, and many offer only partial answers. In their “Evidence Based Medicine Manifesto for Better Healthcare,” Heneghan et al. [91] state that “too many research studies are poorly designed or executed. Too much of the resulting research evidence is withheld or disseminated piecemeal. As the volume of clinical research activity has grown the quality of evidence has often worsened, which has compromised the ability of all health professionals to provide affordable, effective, high value care for patients” [91]. In addition, RCTs are very challenging to execute, are costly, and have long latency periods. These constraints have important implications during study design, especially when establishing appropriate inclusion criteria or standardizing experimental interventions [3, 4, 18, 38]. The limitations and challenges associated with RCTs have forced physicians to look to alternative study designs that are easier to conduct, take less time to complete, are less expensive, and yield comparable results [2].

Perhaps the most commonly employed tool that allows researchers to quickly and effectively leverage the wealth of existing evidence from various RCTs is meta-analysis [88, 92, 93]. Having said that, systematic reviews, including meta-analyses, generate secondary evidence that is only as good as the cumulative evidence provided by the primary source studies [15, 52]. The validity of evidence from systematic reviews is therefore largely based on the RCTs included, and meta-analyses cannot ameliorate any biases present in source studies [15]. Moreover, systematic reviews and meta-analyses rely solely on published data and evidence, some of which may appear in obscure, not easily accessible journals. In addition, some of the reported data may be limited in scope, with heterogeneous reporting of outcome parameters. The resulting distortion, whereby the published (and findable) literature does not faithfully represent all of the research actually conducted, is known as publication bias. To minimize such bias, researchers are advised to search the literature thoroughly and methodically, and to maintain contact with study authors/investigators and other experts in the field [15].

Observational studies, including case-control and cohort designs, come with their own set of limitations and biases [94, 95]. Case-control studies draw a comparison between individuals with a condition or disease (cases) and individuals in whom the condition or disease is absent (controls), optimally in a fixed ratio of cases to controls (e.g., 1:2, 1:3, or 1:4) [14]. Since both groups are compared with respect to their past and present exposures, much of the information collected relies on recall and may end up being incomplete or even untrue [47]. In addition, validation of the collected information may be extremely difficult or not feasible, and a detailed study of the mechanism of the researched disease is rarely possible. Cohort studies, on the other hand, select a group of individuals with certain characteristics and follow them over a long period of time for the development of a particular disease or outcome of choice [96]. Since cohort studies are usually conducted over extended periods, key challenges include high study costs and ensuring adequate follow-up over a long period of time. Moreover, a sizable group of subjects is required to adequately investigate a rare disease, and control of peripheral variables may be incomplete, resulting in increased bias [4, 37, 44, 45]. Finally, it is difficult to accurately account for changes in medical treatment over time, resulting in the emergence of “temporal bias.”

Prior to the introduction of EBM, unsystematic personal observations carried great weight in shaping both medical education and practice [15]. We now have a much better appreciation of how inherently biased such observations may be, and of how much progress was forfeited by perpetuating a system of subjective opinions, especially when viewed from our current era of less biased, objective scientific investigation [4]. Although the limitations of the various LSEs discussed above may seem considerable, one must remember that they are dwarfed by the potential harm resulting from the unrestricted, non-evidence-based practice of yesterday. As long as practitioners and champions of healthcare quality and safety apply a healthy degree of informed caution when interpreting published evidence and clinical data, continued progress can be made toward a better and safer, evidence-based medicine of tomorrow [2, 8].

5. Evidence-based practice: focus on quality and safety

The practice of EBM is essential for making safe and effective clinical decisions and is also crucial to promoting quality improvement and ensuring a continuous focus on patient safety in healthcare organizations [10, 25, 97]. Research is the foundation of the practice of EBM. It helps drive enhanced health outcomes, promotes standardized approaches to care, and facilitates cost reduction in a resource-limited healthcare system [98, 99, 100, 101]. Evidence for the beneficial effects of EBM continues to accumulate across a diverse range of allied health and medical specialties, including surgery, critical care, primary care and preventive medicine, internal medicine and its subspecialties, obstetrics and gynecology, as well as nursing, hospital administration, health information technology, quality, and patient safety [102, 103, 104, 105, 106]. EBM can also be formulated from patient-reported outcomes using established clinical processes such as The Joint Commission Core Measures [107]. In addition, the Agency for Healthcare Research and Quality (AHRQ) developed a series of quality indicators designed to standardize evidence-based care for preventing in-hospital complications that may result in penalties under the auspices of value-based purchasing programs [108]. Often, performance on standardized quality indicators can be used to benchmark quality and safety performance in various patient populations [108]. Preoperative prophylactic antibiotics, bowel preparation, and deep vein thrombosis prophylaxis are examples of evidence-based best practices that have been defined and protocolized by organizations and initiatives such as the Centers for Medicare & Medicaid Services (CMS) and the Surgical Care Improvement Project (SCIP) [109]. Similarly, checklists have revolutionized healthcare across an increasing number of settings, as documented by multiple studies demonstrating lower mortality and postoperative complication rates, as well as enhanced adherence to patient safety procedures [110, 111, 112, 113, 114, 115].

Patient safety research focuses on the identification of safety issues (e.g., patient safety gaps) and their subsequent remediation through the study and implementation of new practices and policies [113, 116]. Despite ample descriptive evidence, the implementation of safety practices remains an under-researched subject, with much work remaining before “zero incidence” goals are achieved across many adverse event types [9, 117]. Perhaps more troubling is the observation that the gap between research findings and implementation across various clinical settings may indeed be widening [102]. There is an estimated lag time of approximately 17 years from research to implementation in clinical practice [118, 119]. It stands to reason that a better process is required for this much-needed translation to occur more efficiently. For example, the importance of hand hygiene has been a widely accepted fact since the mid-1800s, and numerous studies have confirmed the significant benefit of this practice. Despite widespread awareness and institutional guidelines, compliance among healthcare workers, and doctors in particular, remains low [120, 121]. Dissemination and application of evidence-based safety practices are often met with multiple obstacles and/or outright resistance, at both the individual and organizational levels [8, 106]. In one systematic review of 23 studies of stand-alone teaching of EBM principles in a postgraduate education setting, it was noted that although knowledge increased, behaviors, attitudes, and skills did not change; a system of interactive teaching strategies was therefore recommended [122]. The development of effective policies based on carefully vetted research evidence constitutes another major barrier to the actual implementation of evidence into practice, especially within organizations where expert opinion and hierarchical decision-making impose “glass ceilings” on evidence-based approaches. Moreover, numerous methodological and ethical complexities make research in clinical safety particularly challenging, as patients often cannot be subjected to blinding or randomization [102].

It is important to reiterate that EBM is not purely about conducting RCTs and implementing their context-appropriate results in clinical practice. Evidence-based medicine extends to critical decision-making regarding treatments and practices, stemming from carefully and thoughtfully considering and weighing the “best evidence” [123, 124, 125]. Well-designed case-control and cohort studies can prove to be equally effective tools and should be considered in areas where RCTs are simply not feasible or would be impractical. Lastly, it is every practitioner’s obligation to provide the best available care for their patients, and that care will continue to be driven by the increasing wealth of available literature [126], hopefully characterized by better LSEs and higher overall quality of both methodology and data. Practitioners and champions of patient safety must therefore be encouraged to thoroughly search and evaluate published research and to thoughtfully consider the “best evidence” in an unbiased, holistic manner before committing to any clinical decisions or programmatic implementations.

Clinical pathways and guidelines are used by practitioners to provide a framework of care for specific patient populations, with the goal of improving outcomes [107]. Clinical guidelines are evidence-based care recommendations for defined populations and assist the clinician in decision-making regarding the patient care plan. Clinical pathways are used to implement the guidelines in practice and represent what has been determined to be the best evidence-based care for most patients [127]. They are typically written tools and may be facility-specific, with the overarching goal of minimizing variability and optimizing outcomes. Rotter et al. [128] reviewed 27 studies involving 11,398 participants, twenty of which compared clinical pathways with usual care. Their review identified a reduction in complications and improved documentation. Most studies also reported significant reductions in patient length of stay and, thus, a favorable impact on associated costs [128].

6. Grades of recommendation

It has long been known that clinical practices based on scientific evidence can only be “as good as” the underlying evidence and judgments [124]. Parallel to the assessment of LSE discussed in previous sections of this manuscript, the need arose for the ability to grade the corresponding recommendations, a necessary step for reconciling all of the components, and ensuring the internal consistency, of EBM practices [124]. Grading of recommendations was pioneered by the Scottish Intercollegiate Guidelines Network (SIGN), with subsequent worldwide embrace and adoption of this powerful healthcare quality improvement paradigm [129, 130]. As outlined in Table 2, recommendations are graded on a scale from A (highest) to C (lowest), with the overall goal of carefully considering and weighing the objective and subjective components of both the available evidence and its corresponding interpretation. It is important to note that a number of other GOR paradigms have been devised, with the topic being vast enough to warrant its own dedicated chapter and/or book [124]. Finally, another matter that is beyond the scope of the current discussion is the advent of various reporting requirements for different types of studies. The reader is referred to external resources for additional information on this important and increasingly complex subject [131, 132, 133, 134].

Another important development in the area of translating evidence into practice was the introduction of the GRADE (Grading of Recommendations Assessment, Development, and Evaluation) approach [135, 136]. In the GRADE paradigm, evidence is assessed in terms of both its certainty (e.g., quality) and the strength of the corresponding clinical recommendation(s) [135, 137]. In terms of the practical applicability of the GRADE system, quality-of-evidence ratings and the corresponding definitions are provided in Table 3 [138]. A multi-tiered system, examining specific evidence-related factors and criteria in the context of their influence on the direction and strength of the recommendation, is then employed to help with clinical implementation and translation of research data [139]. Since its introduction, the GRADE paradigm has provided a well-organized and objective framework for evaluating the relative importance of research outcomes and alternative clinical approaches, and for summarizing evidence for systematic reviews and clinical practice guidelines [139].

| Quality of evidence | Definition |
|---------------------|------------|
| High | High level of confidence that the true effect is close to the estimate of the effect |
| Moderate | Moderate level of confidence in the effect estimate. In other words, the true effect is likely to approximate the estimate of the effect, but a non-trivial possibility of a “substantial difference” exists |
| Low | There is low overall confidence that the effect estimate reflects the true effect. In other words, the true (actual) effect may be substantially different from the estimated effect |
| Very low | There is very little confidence that the effect estimate reflects the true effect. In other words, the true effect is likely to be substantially different from the estimated effect |

Table 3.

Quality of evidence assessment definitions, as utilized in the GRADE approach [138].

7. Synthesis: putting evidence to work, one improvement cycle at a time

The entirety of our previous discussion revolved around the levels of scientific evidence, various aspects of their interpretation and implementation, and grades of recommendation, all outlined in the overall context of EBM. At this point, it is important for the reader to become familiar with some of the methodologies employed in healthcare quality and patient safety improvement efforts. It is critical to emphasize that these approaches not only rely on EBM for planning and assessment but also help modify our existing EBM patterns through a continuous process improvement cycle. While evidence-based medicine has focused on providing the most recent evidence-based care for patients, quality improvement has focused more on the way we provide that care [140]. The evidence must be reviewed to ensure that it indeed represents the right care, and a clinical improvement process is needed to implement the evidence-based change. The two most common formats used in the areas of healthcare quality improvement and patient safety are the PDCA (Plan-Do-Check-Act, Figure 2) and the 5A’s (Assess-Ask-Acquire-Appraise-Apply, Figure 3) methodologies [141, 142, 143, 144, 145, 146, 147]. The goal of these performance improvement approaches is to achieve the desired results and then continue on to another part of the process [107].
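
To make the iterative structure of such cycles concrete, the minimal sketch below models a PDCA loop in code. This is purely our illustration: the stage functions, the hand-hygiene example, and the compliance target are hypothetical placeholders for real institutional processes, not anything prescribed by the methodologies themselves.

```python
from dataclasses import dataclass

@dataclass
class CycleResult:
    change: str        # the change that was piloted
    measured: float    # the observed result
    target: float      # the pre-defined success threshold
    adopted: bool      # whether the change met its target

def pdca(plan, do, check, act, max_cycles=3):
    """Run Plan-Do-Check-Act iterations until a change meets its
    target or the cycle budget is exhausted."""
    history = []
    for _ in range(max_cycles):
        change, target = plan(history)            # Plan: define the change and its target
        measured = do(change)                     # Do: pilot the change, collect data
        met = check(measured, target)             # Check: compare results against the target
        act(change, met)                          # Act: adopt, adapt, or abandon the change
        history.append(CycleResult(change, measured, target, met))
        if met:
            break
    return history

# Hypothetical usage: iterating toward 90% hand-hygiene compliance.
log = pdca(
    plan=lambda history: ("add alcohol-rub stations", 0.90),
    do=lambda change: 0.82,                       # e.g., observed compliance after the pilot
    check=lambda measured, target: measured >= target,
    act=lambda change, met: None,                 # adopt/adapt decision would happen here
)
print(log[-1])
```

The 5A’s cycle (Figure 3) can be framed analogously, with Assess-Ask-Acquire-Appraise-Apply taking the place of the four PDCA stages.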

8. Conclusion

Evidence-based medicine continues to evolve into a practical way of integrating feedback from process outcomes and research results into clinical practice, assisting practitioners globally in providing optimal care for their patients. Understanding the different levels of evidence and the strength of recommendations is an integral component of EBM and helps guide decision making, but must consistently be interpreted in the context of sound clinical judgment and a strong therapeutic relationship with our patients. Champions of patient safety and care quality should be familiar with and comfortable in the application of the above concepts in their everyday practice. In addition, excellent knowledge of established standards for reporting evidence, as well as key methodologies used in the process of guideline implementation, will help guide clinicians toward providing the highest quality, safest possible care to their patients. It is crucial to understand that no single study should be accepted as “fact” nor should any study be disregarded based purely on its LSE. Instead, deliberate efforts should be made to critically analyze recommendations and apply them judiciously, after careful consideration of all available evidence has been made in the context of each specific clinical situation and setting. It is essential that healthcare institutions undergo a cultural transformation to ensure that evidence-based safety practices are introduced, effectively implemented, and allowed to achieve their full potential and intended impact [101].

References

  1. 1. Sackett DL et al. Evidence based medicine: What it is and what it isn’t. BMJ. 1996;312:71-72
  2. 2. Graham AJ, Grondin SC. Evidence-based medicine: Levels of evidence and grades of recommendation. In: Difficult Decisions in Thoracic Surgery. London: Springer; 2007
  3. 3. Manchikanti L, Hirsch JA, Smith HS. Evidence-based medicine, systematic reviews, and guidelines in interventional pain management: Part 2: Randomized controlled trials. Pain Physician. 2008;11(6):717-773
  4. 4. Petrisor B, Bhandari M. The hierarchy of evidence: Levels and grades of recommendation. Indian Journal of Orthopaedics. 2007;41(1):11
  5. 5. Haynes RB et al. Transferring evidence from research into practice: 1. The role of clinical care research evidence in clinical decisions. ACP Journal Club. 1996;125:14-16
  6. 6. Ostrom AL et al. Moving forward and making a difference: Research priorities for the science of service. Journal of Service Research. 2010;13(1):4-36
  7. 7. Harrigan M, ARCHIVED-Quest for Quality in Canadian Health Care: Continuous Quality Improvement [Health Canada, 2001]. Available at: https://www.canada.ca/content/dam/hc-sc/migration/hc-sc/hcs-sss/alt_formats/hpb-dgps/pdf/pubs/2000-qual/quest-quete-eng.pdf [Last access date: Apr 29, 2018]
  8. 8. Tolentino JC et al. Introductory chapter: Developing patient safety champions. In: Vignettes in Patient Safety. Vol. 2. Rijeka, Croatia: InTech; 2018
  9. 9. Stawicki S et al. Fundamentals of Patient Safety in Medicine and Surgery. New Delhi: Wolters Kluwer Health (India) Pvt Ltd; 2014
  10. 10. Leape LL, Berwick DM, Bates DW. What practices will most improve safety?: Evidence-based medicine meets patient safety. JAMA. 2002;288(4):501-507
  11. 11. Tolentino JC et al. Introductory chapter: Developing patient safety champions. In: Vignettes in Patient Safety. Vol. 2. Rijeka, Croatia: InTech; 2018
  12. 12. Polit DF, Beck CT. Essentials of Nursing Research: Appraising Evidence for Nursing Practice. Ambler, Pennsylvania: Lippincott Williams & Wilkins; 2010
  13. 13. Maxwell SE, Delaney HD. Designing Experiments and Analyzing Data: A Model Comparison Perspective. Vol. 1. New York: Psychology Press; 2004
  14. 14. Hon HH, Stoltzfus JC, Stawicki SP. Biostatistics for the intensivist: A clinically oriented guide to research analysis and interpretation. In: Principles of Adult Surgical Critical Care. Cham, Switzerland: Springer; 2016. pp. 453-463
  15. 15. Elamin MB, Montori VM. The hierarchy of evidence: from unsystematic clinical observations to systematic reviews. In: Burneo JG, editor. Neurology: An Evidence-based Approach. New York: Springer Science; 2012
  16. 16. University, W.S. Evidence based practice Toolkit. Available from: http://libguides.winona.edu/ebptoolkit
  17. 17. Web_Resource. Tables of levels of scientific evidence and grades of recommendation. 2010. Available from: http://www.guiasalud.es/egpc/traduccion/ingles/esquizofrenia/completa/documentos/anexos/anexo1.pdf [January 18, 2018]
  18. 18. Burns PB, Rohrich RJ, Chung KC. The levels of evidence and their role in evidence-based medicine. Plastic and Reconstructive Surgery. 2011;128(1):305-310
  19. 19. Straus S, et al. Evidence Based Medicine: How to Practice and Teach EBM. New York: Churchill Livingstone; 1997
  20. 20. Alansari MA, Hijazi MH, Maghrabi KA. Making a difference in eye care of the critically ill patients. Journal of Intensive Care Medicine. 2015;30(6):311-317
  21. 21. Guyatt G, Rennie D. Part 1. The basics: Using the medical literature. Introduction: The philosophy of evidence-based medicine. In: Users’ Guides to the Medical Literature: A Manual for Evidence-Based Clinical Practice. Chicago: American Medical Association; 2002. pp. 3-12
  22. 22. Tomlin G, Borgetto B. Research pyramid: A new evidence-based practice model for occupational therapy. American Journal of Occupational Therapy. 2011;65(2):189-196
  23. 23. Guyatt G et al. Users' Guides to the Medical Literature: A Manual for Evidence-Based Clinical Practice. 3rd ed. New York, NY: McGraw-Hill Education; 2015
  24. 24. Steves R, Hootman JM. Evidence-based medicine: What is it and how does it apply to athletic training? Journal of Athletic Training. 2004;39(1):83-87
  25. 25. Shojania KG et al. Making health care safer: A critical analysis of patient safety practices. Evidence Report/Technology Assessment (Summary). 2001;43(1):668
  26. 26. Ramírez F et al. The neglected eye: Ophthalmological issues in the intensive care unit. Critical Care and Shock. 2008;11(3):72-82
  27. 27. Atkins D et al. Grading quality of evidence and strength of recommendations. BMJ. 2004;328:1490-1494
  28. 28. Glasgow RE et al. External validity: We need to do more. Annals of Behavioral Medicine. 2006;31(2):105-108
  29. 29. Glasgow RE et al. An evidence integration triangle for aligning science with policy and practice. American Journal of Preventive Medicine. 2012;42(6):646-654
  30. 30. Knottnerus JA, Dinant GJ. Medicine based evidence, a prerequisite for evidence based medicine. BMJ: British Medical Journal. 1997;315(7116):1109
  31. 31. Robey RR. A five-phase model for clinical-outcome research. Journal of Communication Disorders. 2004;37(5):401-411
  32. 32. Giacomini MK, Cook DJ, E.-B.M.W. Group. Users' guides to the medical literature: XXIII. Qualitative research in health care A. Are the results of the study valid? JAMA. 2000;284(3):357-362
  33. 33. Bhandari M et al. Hierarchy of evidence: Differences in results between non-randomized studies and randomized trials in patients with femoral neck fractures. Archives of Orthopaedic and Trauma Surgery. 2004;124:10-16
  34. 34. Akobeng A. Understanding randomised controlled trials. Archives of Disease in Childhood. 2005;90(8):840-844
  35. 35. Schulz KF, DA G. Generation of allocation sequences in randomised trials: Chance, not choice. Lancet. 2002;359:515-519
  36. 36. Thoma A et al. Evidence-based surgery working group. Users' guide to the surgical literature. How to assess a randomized controlled trial in surgery. Canadian Journal of Surgery. 2004;47:200-208
  37. 37. Song JW, Chung KC. Observational studies: Cohort and case-control studies. Plastic and Reconstructive Surgery. 2010;126(6):2234-2242
  38. 38. Bothwell LE et al. Assessing the gold standard-lessons from the history of RCTs. The New England Journal of Medicine. 2016;374:2175-2181
  39. 39. Olsen L, Saunders RS, McGinnis JM. Clinical Research, Patient Care, and Learning that Is Real-Time and Continuous. Washington (DC): National Academies Press (US); 2011
  40. 40. Hawkins C. A System Framework for Evidence Based Implementations in a Health Care Organization. University of Southern California; Available at: https://search.proquest.com/openview/8ac802f36e1ff2961ff66caf8fab301b/1?pq-origsite=gscholar&cbl=18750&diss=y [Last access date: April 29, 2018]
  41. 41. Concato J, Shah N, RI H. Randomized, controlled trials, observational studies and the hierarchy of research designs. New England Journal of Medicine. 2000;342:1887-1892
  42. 42. Mann C. Observational research methods. Research design II: Cohort, cross sectional, and case-control studies. Emergency Medicine Journal. 2003;20(1):54-60
  43. 43. Silman AJ, Macfarlane GJ. Epidemiological Studies: A Practical Guide. Cambridge, UK: Cambridge University Press; 2002
  44. 44. Jepsen P et al. Interpretation of observational studies. BMJ. 2004;90(8):956-960
  45. 45. Schlesselman JJ. Case-Control Studies Design, Conduct, Analysis. New York: Oxford University Press; 1982
  46. 46. Ray WA. Evaluating medication effects outside of clinical trials: New-user designs. American Journal of Epidemiology. 2003;158(9):915-920
  47. 47. Schlesselman JJ. Case-control Studies: Design, Conduct, Analysis. New York: Oxford University Press; 1982
  48. 48. Barbier O, Hoogmartens M. Evidence-based medicine in orthopaedics. Acta Orthopaedica Belgica. 2004;70(2):91-97
  49. 49. Phillips B et al. Levels of evidence and grades of recommendation. In: Oxford-Centre for Evidence Based Medicine. Available online at: https://www.cebm.net/2009/06/oxford-centre-evidence-based-medicine-levels-evidence-march-2009/ [Last access date April 29, 2018]
  50. 50. Bhoot N et al. Collected case reports versus collected case series: Are they equivalent? Southern Medical Journal. 2007;100:1176
  51. 51. Styskel B et al. Retained surgical items: Building on cumulative experience. International Journal of Academic Medicine. 2016;2(1):5-21
  52. 52. Guyatt G et al. User’s Guide to the Medical Literature: A Manual for Evidence-based Clinical Practice. 2nd ed. New York: McGraw-Hill/AMA; 2002
  53. 53. Glass GV. Primary, secondary and meta-analysis of research. Educational Researcher. 1976;5:3-8
  54. 54. Haidich AB. Meta-analysis in medical research. Hippokratia. 2010;14(1):29-37
  55. 55. Sackett DL et al. Clinical Epidemiology: A Basic Science for Clinical Medicine. Boston, MA: B.A. Company; 1991
  56. 56. Moffatt-Bruce SD et al. Risk factors for retained surgical items: A meta-analysis and proposed risk stratification system. Journal of Surgical Research. 2014;190(2):429-436
  57. 57. Barry N et al. An exploratory, hypothesis-generating, meta-analytic study of damage control resuscitation in acute hemorrhagic shock: Examining the behavior of patient morbidity and mortality in the context of plasma-to-packed red blood cell ratios. International Journal of Academic Medicine. 2016;2(2):159
  58. 58. Seidl KL, Newhouse RP. The intersection of evidence-based practice with 5 quality improvement methodologies. Journal of Nursing Administration. 2012;42(6):299-304
  59. 59. Taylor MJ et al. Systematic review of the application of the plan-do-study-act method to improve quality in healthcare. BMJ Quality & Safety. 2013. DOI: 10.1136/bmjqs-2013-001862. http://qualitysafety.bmj.com/content/early/2013/09/11/bmjqs-2013-001862.citation-tools
  60. 60. Schultz JR. To improve performance, replace annual assessment with ongoing feedback. Global Business and Organizational Excellence. 2015;34(5):13-20
  61. 61. Newman MK et al. Primary breast lymphoma in a patient with silicone breast implants: A case report and review of the literature. Journal of Plastic, Reconstructive & Aesthetic Surgery. 2008;61:822-825
  62. 62. Gaudet G et al. Breast lymphoma associated with breast implants: Two case-reports and a review of the literature. Leukemia & Lymphoma. 2002;43:115-119
  63. 63. Lipworth L, Tarone RE, JK ML. Breast implants and lymphoma risk: A review of the epidemiologic evidence through 2008. Plastic and Reconstructive Surgery. 2009;123:790-793
  64. 64. Duvic M et al. Cutaneous T-cell lymphoma in association with silicone breast implants. Journal of the American Academy of Dermatology. 1995;32:939-942
  65. 65. Lipworth L et al. Cancer among Scandinavian women with cosmetic breast implants: A pooled long-term follow-up study. International Journal of Cancer. 2009;124:490-493
  66. 66. Deapen DM, Hirsch EM, Brody GS. Cancer risk among Los Angeles women with cosmetic breast implants. Plastic and Reconstructive Surgery. 2007;119:1987-1992
  67. 67. Brisson J et al. Cancer incidence in a cohort of Ontario and Quebec women having bilateral breast augmentation. International Journal of Cancer. 2006;118:2854-2862
  68. 68. Greenhalgh T et al. Six ‘biases’ against patients and carers in evidence-based medicine. BMC Medicine. 2015;13(1):200
  69. 69. DiCenso A, Guyatt G, Ciliska D. Evidence-Based Nursing-E-Book: A Guide to Clinical Practice. St. Louis, MO: Elsevier Health Sciences; 2014
  70. 70. Stawicki SP et al. Retained surgical items: A problem yet to be solved. Journal of the American College of Surgeons. 2013;216(1):15-22
  71. 71. Lincourt AE et al. Retained foreign bodies after surgery. Journal of Surgical Research. 2007;138(2):170-174
  72. 72. Gawande AA et al. Risk factors for retained instruments and sponges after surgery. New England Journal of Medicine. 2003;348(3):229-235
  73. 73. Denkler K. A comprehensive review of epinephrine in the finger: To do or not to do. Plastic and Reconstructive Surgery. 2001;108:114-124
  74. 74. Lalonde D et al. A multicenter prospective study of 3,110 consecutive cases of elective epinephrine use in the fingers and hand: The Dalhousie Project clinical phase. Journal of Hand Surgery American Society for Surgery of the Hand. 2005;30:1061-1067
  75. 75. Chowdhry S et al. Do not use epinephrine in digital blocks: Myth or truth? Part II. A retrospective review of 1111 cases. Plastic and Reconstructive Surgery. 2010;126:2031-2034
  76. 76. Wilhelmi BJ et al. Do not use epinephrine in digital blocks: Myth or truth? Plastic and Reconstructive Surgery. 2001;107:393-397
  77. 77. Klimt C. A study of the effects of hypoglycemic agents on vascular complications in patients with adult-onset diabetes. Diabetes. 1970;19:77-815
  78. 78. Cornfield J. The University Group Diabetes Program: A further statistical analysis of the mortality findings. JAMA. 1971;217(12):1676-1687
  79. 79. Seltzer HS. A summary of criticisms of the findings and conclusions of the University Group Diabetes Program (UGDP). Diabetes. 1972;21(9):976-979
  80. 80. Schwartz TB. The tolbutamide controversy: A personal perspective. Annals of Internal Medicine. 1971;75(2):303-306
81. ALLHAT Collaborative Research Group. Major outcomes in high-risk hypertensive patients randomized to angiotensin-converting enzyme inhibitor or calcium channel blocker vs diuretic: The Antihypertensive and Lipid-Lowering Treatment to Prevent Heart Attack Trial (ALLHAT). JAMA. 2002;288(23):2981-2997
82. Austin PC et al. Changes in prescribing patterns following publication of the ALLHAT trial. JAMA. 2004;291(1):44-45
83. Marks HM. The Progress of Experiment: Science and Therapeutic Reform in the United States, 1900–1990. Cambridge, UK: Cambridge University Press; 1997
84. Greene J. Prescribing by Numbers: Drugs and the Definition of Disease. Baltimore, MD: Johns Hopkins University Press; 2007
85. Pollack A. The Minimal Impact of a Big Hypertension Study. New York Times; 2008
86. Crossley M. Infected judgment: Legal responses to physician bias. Villanova Law Review. 2003;48:195
87. Hack LM, Gwyer J. Evidence into Practice: Integrating Judgment, Values and Research. Philadelphia, PA: F.A. Davis Co; 2013
88. Kendrick DC et al. Crossing the evidence chasm: Building evidence bridges from process changes to clinical outcomes. Journal of the American Medical Informatics Association. 2007;14(3):329-339
89. Wan TT, Connell AM. Monitoring the Quality of Health Care: Issues and Scientific Approaches. New York, NY: Springer Science & Business Media; 2012
90. Kissick WL. Medicine's Dilemmas: Infinite Needs Versus Finite Resources. New Haven, CT: Yale University Press; 1994
91. Heneghan C et al. Evidence based medicine manifesto for better healthcare. British Medical Journal. 2017;357:j2973
92. Wolf FM. Lessons to be learned from evidence-based medicine: Practice and promise of evidence-based medicine and evidence-based education. Medical Teacher. 2000;22(3):251-259
93. Riley WT. A new era of clinical research methods in a data-rich environment. In: Oncology Informatics. San Diego, CA: Elsevier; 2016. pp. 343-355
94. Vandenbroucke JP et al. Strengthening the reporting of observational studies in epidemiology (STROBE): Explanation and elaboration. PLoS Medicine. 2007;4(10):e297
95. Schneeweiss S, Avorn J. A review of uses of health care utilization databases for epidemiologic research on therapeutics. Journal of Clinical Epidemiology. 2005;58(4):323-337
96. Fletcher RH, Fletcher SW, Fletcher GS. Clinical Epidemiology: The Essentials. Baltimore, MD: Lippincott Williams & Wilkins; 2012
97. ClinicalKey. Applying Evidence-Based Practice to Improve Quality. Elsevier. Available from: https://www.clinicalkey.com/info/blog/applying-evidence-based-practice-improve-quality/ [Accessed: April 29, 2018]
98. Fineout-Overholt E, Melnyk B, Schultz A. Transforming health care from the inside out: Advancing evidence-based practice in the 21st century. Journal of Professional Nursing. 2005;21:335-344
99. Peterson ED, Bynum DZ, Roe MT. Association of evidence-based care processes and outcomes among patients with acute coronary syndromes: Performance matters. The Journal of Cardiovascular Nursing. 2008;23(1):50-55
100. Black AT et al. Promoting evidence-based practice through a research training program for point-of-care clinicians. The Journal of Nursing Administration. 2015;45(1):14-20
101. Nieva V, Sorra J. Safety culture assessment: A tool for improving patient safety in healthcare organizations. Quality and Safety in Health Care. 2003;12(suppl 2):ii17-ii23
102. Shojania KG et al. Making Health Care Safer: A Critical Analysis of Patient Safety Practices. Markowitz AJ, editor. San Francisco, CA: University of California at San Francisco (UCSF)–Stanford University Evidence-based Practice Center; 2001
103. Melnyk BM, Fineout-Overholt E. Evidence-Based Practice in Nursing & Healthcare: A Guide to Best Practice. Ambler, PA: Lippincott Williams & Wilkins; 2011
104. Pravikoff DS, Tanner AB, Pierce ST. Readiness of US nurses for evidence-based practice: Many don't understand or value research and have had little or no training to help them find evidence on which to base their practice. AJN, The American Journal of Nursing. 2005;105(9):40-51
105. Brooks JM et al. Effect of evidence-based acute pain management practices on inpatient costs. Health Services Research. 2009;44(1):245-263
106. Stawicki SP, Firstenberg MS. Introductory chapter: The decades-long quest continues toward better, safer healthcare systems. In: Vignettes in Patient Safety. Vol. 1. Rijeka, Croatia: InTech; 2017
107. Brown JA. The Janet A. Brown Healthcare Quality Handbook. 29th ed. Pasadena, CA: JB Quality Solutions, Inc.; 2016
108. AHRQ. Agency for Healthcare Research and Quality: AHRQ Quality Indicators. 2018. Available from: http://www.qualityindicators.ahrq.gov/ [Accessed: February 2, 2018]
109. Kwaan MR, Melton GB. Evidence-based medicine in surgical education. Clinics in Colon and Rectal Surgery. 2012;25(3):151-155
110. Boyd J, Wu G, Stelfox H. The impact of checklists on inpatient safety outcomes: A systematic review of randomized controlled trials. Journal of Hospital Medicine. 2017;12:675-682
111. Smith E et al. Surgical safety checklist: Productive, nondisruptive, and the “right thing to do”. Journal of Postgraduate Medicine. 2015;61(3):214
112. Haugen AS et al. Effect of the World Health Organization checklist on patient outcomes: A stepped wedge cluster randomized controlled trial. Annals of Surgery. 2015;261(5):821-828
113. Wachter RM. Patient safety at ten: Unmistakable progress, troubling gaps. Health Affairs. 2009;29(1):165-173
114. Salzwedel C et al. The effect of a checklist on the quality of post-anaesthesia patient handover: A randomized controlled trial. International Journal for Quality in Health Care. 2013;25(2):176-181
115. Fudickar A et al. The effect of the WHO Surgical Safety Checklist on complication rate and communication. Deutsches Ärzteblatt International. 2012;109(42):695
116. Nicolini D, Waring J, Mengis J. Policy and practice in the use of root cause analysis to investigate clinical adverse events: Mind the gap. Social Science & Medicine. 2011;73(2):217-225
117. Nguyen MC, Moffatt-Bruce SD. What's new in academic medicine? Retained surgical items: Is “zero incidence” achievable? International Journal of Academic Medicine. 2016;2(1):1
118. Morris ZS, Wooding S, Grant J. The answer is 17 years, what is the question: Understanding time lags in translational research. Journal of the Royal Society of Medicine. 2011;104(12):510-520
119. Brownson RC et al. From the Schools of Public Health. Public Health Reports. 2006;121(1):97-103
120. Grol R, Grimshaw J. From best evidence to best practice: Effective implementation of change in patients' care. Lancet. 2003;362:1225-1230
121. Randle J, Arthur A, Vaughan N. Twenty-four-hour observational study of hospital hand hygiene compliance. Journal of Hospital Infection. 2010;76(3):252-255
122. Nunan D et al. Ten essential papers for the practice of evidence-based medicine. Evidence-Based Medicine. 2017. DOI: 10.1136/ebmed-2017-110854
123. De Leeuw E et al. It's Research, Jim, but Not as We Know It. Acting at the Nexus: Integration of Research, Policy and Practice. Geelong: Deakin University; 2007
124. Atkins D et al. Grading quality of evidence and strength of recommendations. BMJ (Clinical Research Ed.). 2004;328(7454):1490
125. Grol R, Grimshaw J. From best evidence to best practice: Effective implementation of change in patients' care. The Lancet. 2003;362(9391):1225-1230
126. DeVries JG, Berlet GC. Understanding levels of evidence for scientific communication. Foot & Ankle Specialist. 2010;3:205-209
127. Pelletier LR, Beaudin CL. Q Solutions: Essential Resources for the Healthcare Quality Professional. Glenview, IL: National Association for Healthcare Quality; 2008
128. Rotter T et al. Clinical pathways: Effects on professional practice, patient outcomes, length of stay and hospital costs. Cochrane Database of Systematic Reviews. 2010;(3):CD006632
129. Twaddle S, Qureshi S. Scottish Intercollegiate Guidelines Network. Evidence-Based Healthcare and Public Health. 2005;9(6):405-409
130. SIGN. The Scottish Intercollegiate Guidelines Network (SIGN): Who We Are. 2018. Available from: http://www.sign.ac.uk/who-we-are.html [Accessed: January 18, 2018]
131. Moher D et al. The CONSORT statement: Revised recommendations for improving the quality of reports of parallel-group randomised trials. Lancet. 2001;357:1191-1194
132. Simera I et al. A catalogue of reporting guidelines for health research. European Journal of Clinical Investigation. 2010;40(1):35-53
133. Gagnier JJ et al. The CARE guidelines: Consensus-based clinical case report guideline development. Journal of Clinical Epidemiology. 2014;67(1):46-51
134. Moher D et al. Preferred reporting items for systematic reviews and meta-analyses: The PRISMA statement. PLoS Medicine. 2009;6(7):e1000097
135. Schünemann HJ et al. Letters, numbers, symbols and words: How to communicate grades of evidence and recommendations. Canadian Medical Association Journal. 2003;169(7):677-680
136. Andrews J et al. GRADE guidelines: 14. Going from evidence to recommendations: The significance and presentation of recommendations. Journal of Clinical Epidemiology. 2013;66(7):719-725
137. Schünemann HJ et al. Rating quality of evidence and strength of recommendations: GRADE: Grading quality of evidence and strength of recommendations for diagnostic tests and strategies. BMJ: British Medical Journal. 2008;336(7653):1106
138. Balshem H et al. GRADE guidelines: 3. Rating the quality of evidence. Journal of Clinical Epidemiology. 2011;64(4):401-406
139. Wikipedia. The Grading of Recommendations Assessment, Development and Evaluation (GRADE) approach. 2018. Available from: https://en.wikipedia.org/wiki/The_Grading_of_Recommendations_Assessment,_Development_and_Evaluation_(GRADE)_approach#cite_note-1 [Accessed: March 22, 2018]
140. Glasziou P, Ogrinc G, Goodman S. Can evidence-based medicine and clinical quality improvement learn from each other? BMJ Quality & Safety. 2011;20(Suppl 1):i13-i17
141. Sackett DL et al. Evidence based medicine: What it is and what it isn't. British Medical Journal. 1996;312:71-72
142. Guyatt G et al. Users' Guides to the Medical Literature: A Manual for Evidence-Based Clinical Practice. Vol. 20. Chicago, IL: AMA Press; 2002
143. Satterfield JM et al. Toward a transdisciplinary model of evidence-based practice. The Milbank Quarterly. 2009;87(2):368-390
144. Fukui T. Patient safety and quality of medical care. Editorial: From evidence-based medicine to PDCA cycle. The Journal of the Japanese Society of Internal Medicine. 2012;101(12):3365-3367
145. Saxena S, Ramer L, Shulman IA. A comprehensive assessment program to improve blood-administering practices using the FOCUS–PDCA model. Transfusion. 2004;44(9):1350-1356
146. Graban M. Lean Hospitals: Improving Quality, Patient Safety, and Employee Engagement. Boca Raton, FL: CRC Press; 2016
147. Bushell S. Implementing plan, do, check and act. The Journal for Quality and Participation. 1992;15(5):58
148. Dahm P, Dmochowski R. Evidence-based Urology. Somerset, NJ: John Wiley & Sons; 2010
