Open access peer-reviewed chapter

Errors in Surgical Pathology Laboratory

Written By

Monique Freire Santana and Luiz Carlos de Lima Ferreira

Submitted: 17 September 2017 Reviewed: 05 December 2017 Published: 22 August 2018

DOI: 10.5772/intechopen.72919

From the Edited Volume

Quality Control in Laboratory

Edited by Gaffar Sarwar Zaman


Abstract

Pathology must aim at a correct and complete diagnosis that is timely, useful, and understandable to the attending physician. In daily practice, however, there are multiple opportunities for error in the pathology laboratory, each with consequences for patient care and prognosis. In this chapter, we discuss the different concepts of error and diagnostic concordance in pathology, identify the points in the diagnostic process at which errors are most frequent, and propose measures to minimize the chance of their occurrence.

Keywords

  • medical errors
  • surgical pathology
  • pathology errors

1. Introduction

In 1999, the American Institute of Medicine (now the National Academy of Medicine) published the report “To Err Is Human: Building a Safer Health System” [1], which broadly defines medical error as the failure to complete a planned action as intended or the use of a wrong plan to achieve an aim. Sirota summarized the document and its implications for pathology. He considers that the efforts of professional societies such as the College of American Pathologists (CAP), through its Laboratory Accreditation Program and its councils and commissions, set the quality standards for the practice of pathology. In professional training, academic programs and the American Board of Pathology, with its certification mechanism, help to ensure competence in the practice of pathology [2].

The best-known quality control initiative dates from 1989, when the CAP introduced the Q-PROBES program, which defines quality in terms of the practices of laboratory medicine and anatomic pathology. Since then, 118 Q-PROBES studies have been conducted in thousands of hospitals and independent laboratories in the USA, elsewhere in North America, and abroad to identify and describe error experiences. These studies measure how frequently errors occur: participating laboratories submit data used to calculate normative error rates across the steps of laboratory testing. The resulting exchange of information is intended to persuade laboratories to abandon practices and behaviors that are harmful to the testing process [3].

Several reasons may explain why errors in medical laboratories receive less attention than other medical errors: the high variability of error rates across laboratory testing, the difficulty of detecting every error, and the many steps involved in the total testing process (TTP). Moreover, the TTP is complex and requires cooperation among several health institutions. Physicians and other stakeholders often do not fully appreciate how harmful errors in laboratory medicine can be, and laboratory professionals are frequently reluctant to report and disclose data about errors [4].

Errors in the pathology laboratory are so common that, in a self-administered mailed survey of 260 practicing pathologists and 81 academic hospital laboratory medical directors, approximately 95% reported having been involved in some error, yet only 48% of these professionals believed that current error reporting systems were adequate. Among the factors that might make respondents less likely to disclose a serious error to a patient, the most common were the possibility that the patient would not understand what he or she was being told (n = 84, 49.7%) and that the physician would not be able to explain the error clearly to the patient (n = 68, 40.2%). A majority believed that minor errors should be disclosed to patients (n = 120, 72.3%), whereas fewer felt the same about near misses (n = 34, 20.1%) [5].

Troxel maintains that society’s expectation of “zero diagnostic error,” and the “zero error standard” upheld by the US judicial system, is unattainable for obvious reasons [6]. The surgical pathology workflow is far more complex than highly mechanized processes with minimal human participation, such as clinical laboratory analysis. Meier [6] describes the pathology production process in 12 steps. It begins with correct identification of the patient’s sample (1), selection of tissue specimens (2), labeling and transport (3), and accessioning (4). The process continues with receipt and gross sampling of specimens (5); fixation, embedding, and sectioning (6); mounting, staining, and labeling the slides (7); and delivering them to the surgical pathologist (8). It then moves to the pathologist’s desk, with examination, collation, and interpretation of the slides (9) and consideration of ancillary tests or additional information (10), followed by composition of the report (11) and, finally, receipt and interpretation of the report by the clinician (12). The surgical pathology report is therefore the product of a complex, multistep task in which every step carries a possibility of error.

Meier et al. [7] proposed a standardized error classification that until then did not exist in pathology. They describe four types of errors (defective identification, defective specimen, defective interpretation, and defective report), distributed according to the processing step in the laboratory. In the pre-analytic phase, they describe defective identification (patient, tissue, laterality, anatomic location) and defective specimens (loss of the specimen, errors in measurement or gross description, floaters, inadequate sampling, and the absence of indicated ancillary studies). The analytic phase includes errors in classification and false negatives or positives, and in the post-analytic phase they describe defective reports (erroneous or missing nondiagnostic information, errors in dictation or typing, and report delivery errors related to computer format, transmission, or upload). In the pre-analytic phase, wrong identification accounts for 27–38% of errors and specimen-related errors for 4–10%. In the analytic phase, diagnostic misinterpretation accounts for 23–28% of errors, and in the post-analytic phase, defective reports account for 28–48%. When applied to amended reports, this proposed error taxonomy showed very good interobserver agreement of 91.4% (kappa = 0.8780; 95% confidence limits, 0.8416–0.9144).
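The kappa statistic cited above expresses how much the observed agreement between two reviewers exceeds the agreement expected by chance. As a purely illustrative aside, the Python sketch below computes Cohen’s kappa from a hypothetical contingency table of two reviewers assigning amended reports to the four defect categories; the counts are invented and are not Meier’s data.

```python
import numpy as np

def cohens_kappa(table: np.ndarray) -> float:
    """Cohen's kappa from a square contingency table of two raters'
    category assignments (rows = rater A, columns = rater B)."""
    total = table.sum()
    p_observed = np.trace(table) / total      # proportion of exact agreements
    row_marg = table.sum(axis=1) / total      # rater A category frequencies
    col_marg = table.sum(axis=0) / total      # rater B category frequencies
    p_expected = np.sum(row_marg * col_marg)  # agreement expected by chance
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical counts: two reviewers assigning amended reports to the four
# defect categories (misinterpretation, misidentification, specimen, report).
table = np.array([
    [40,  2,  1,  2],
    [ 3, 35,  1,  1],
    [ 1,  1, 30,  2],
    [ 2,  1,  2, 45],
])
print(f"kappa = {cohens_kappa(table):.3f}")
```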


2. Diagnostic errors and concordances in pathology

To discuss errors in pathology, it is essential first to define its goals. Pathology should provide a correct and complete diagnosis that is timely, useful, and understandable for the attending physician [8]. Since these goals are multifaceted, it is easy to see that there are multiple opportunities for error. The result must be accurate, based on gold standards, and scientifically validated. But what is the gold standard of pathology? Morphology is subjective and affected by the observer’s experience. Cytogenetic studies by in situ or molecular hybridization are not applicable to most diseases routinely encountered in surgical pathology. The most appropriate measure of diagnostic adequacy is therefore accuracy in the sense that most qualified pathologists will agree on a similar diagnosis when analyzing the same specimen. A major or unacceptable variation is one that has a great effect on therapy or prognosis, such as classifying a benign tumor as malignant. A minor or acceptable variation is one that has no effect on treatment that would alter the progression of the disease and no effect on prognosis, such as some subclassifications of benign or malignant tumors. These definitions can be applied to the three goals of pathology (correct, complete, timely) [8, 9]. Errors can be further divided into errors of accuracy, that is, how well the released diagnosis represents the true pathological process, and errors of precision, related to concordance among pathologists in the interpretation of a case [9].

Meier et al. [10] divided the errors in pathology reports into four categories: errors of interpretation, of identification, of the specimen, and those related to the report. A study based on this classification evaluated 73 institutions participating in Q-PROBES and found 1688 errors in 360,218 surgical pathology cases, a rate of 4.7 errors/1000 cases. Rates were higher in institutions with pathology residency programs (8.5 vs. 5.0/1000, p = 0.01) or in which a percentage of cases was reviewed after release (6.7 vs. 3.8/1000, p = 0.10). Interpretation errors were responsible for 14.6% of cases, identification errors for 13.3%, specimen-related errors for 13.7%, and other modalities for 58.4%. In general, more errors were detected by pathologists (47.4%) than by clinicians (22%). Incorrect interpretations and specimen errors were mostly detected by pathologists (73.5% and 82.7%, respectively, p = 0.001), while identification errors were more frequently detected by other physicians (44.6%, p = 0.001). Identification error rates were lower when reports were reviewed by a second pathologist before release (0.0 vs. 0.6/1000, p < 0.001), and specimen-related errors were less frequent when reports were released after an intradepartmental review of more difficult cases (0.0 vs. 0.4/1000, p = 0.02) [11].

Meier [6] describes why comparing discrepancy rates is difficult: the review event can differ from the initial diagnostic event along six dimensions. The first is internal versus external review: in an internal review, the diagnoses under scrutiny were originally made in the same laboratory, whereas in an external review, pathologists in other practices performed the review. Second, a pre-sign-out review is held before the report is issued, while a post-sign-out review happens after the report has been released. Third, in conference reviews, several experts discuss information about the patient’s diagnosis, prognosis, and treatment to reach an agreement; other reviews are nonconference reviews. Fourth, in an expert review, the examination is conducted by a specialist with extensive experience and knowledge in the field. The fifth distinction is between blinded and nonblinded reviews: in a blinded review, the second pathologist has the same amount of information as the first, and sometimes even less case-specific information. The last distinction is between focused reviews, in which the review targets specific types of cases, and nonfocused reviews, in which the pathologist evaluates a defined fraction of cases of various specimen or diagnosis types.


3. Where is the possibility of error?

Valenstein and Sirota [12] described four classifications of errors:

  1. Scenario in which the error occurred: pre-laboratory errors (identification errors external to the reference laboratory) versus laboratory errors. A second form of this classification divides errors into pre-analytic, analytic, and post-analytic errors, based on the time and place in the laboratory where they occurred. This is the most common classification; it is widely used in clinical analysis laboratories and, since pathology relies on similar work processes, it can also be applied to the evaluation of work in pathology.

  2. Consequences for the patient: errors are classified as near misses (or “close calls”) when the error is detected before causing harm to the patient; adverse events, which damage the patient’s health, for example by requiring a repeat biopsy or an unnecessary procedure; and sentinel events, serious errors that may cause permanent disability or death.

  3. Type of error: patient misidentification or specimen misidentification.

  4. Cause of error: based on the root cause of identification errors, namely human factors, environment, equipment failure, and defective rules, policies, or procedures [12].

In a study to develop a reproducible amendment taxonomy, Meier et al. [13] described a classification in four categories: misinterpretations, misidentifications, defective specimens, and defective reports; a simple illustrative data model for this taxonomy is sketched after the list below.

  1. Misinterpretations: This category is divided into three subtypes, which occur in relation to two levels of diagnostic information. In the first subtype, the diagnostic conclusion describes inaccurate information (false positives or overcalls). In the second subtype, the pathologist fails to recognize, or loses, accurate information (false negatives or undercalls). Both can occur at the primary level of diagnosis (such as changes between positive and negative status or between malignant and benign diagnoses) and at the secondary level, which refers to information on which the clinical context or prognostic implications of the pathologic diagnosis depend, as occurs in malignant tumors.

    The third subtype is misclassification, which occurs when the pathologist switches between similar diagnostic categories, for example, the names of a soft tissue sarcoma, without primary diagnostic implications and without modifying the secondary diagnostic information (the differently labeled sarcoma behaves biologically with the same degree of aggressiveness under the same treatment).

  2. Misidentifications: This category contains four subtypes: patient identification (missing or wrong), tissue designation (e.g., lung confused with liver), laterality specification, and anatomic localization (e.g., skin of the head misidentified as skin of the hand).

  3. Specimen defects included five subtypes: lost specimens, specimens with inadequate sample volume or size, samples with absent or discrepant measurements, inadequately representative sampling, and samples with absent or inappropriate ancillary studies.

  4. Report defects: Three subtypes were observed. The first comprises missing or erroneous non-diagnostic information, such as the practitioners involved in the case, the procedure or the date on which the specimen was collected, or codes identifying the patient, procedure, or diagnosis. The second subtype comprises dictation or typographical errors. Failures or aberrations in electronic formats or in the transmission of report information constitute the third subtype.
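For laboratories that track amendments electronically, this taxonomy maps naturally onto a small data model. The sketch below is only an illustration under assumed names: DefectCategory, AmendedReport, and the subtype labels are hypothetical condensations of the categories described above, not a published schema.

```python
from dataclasses import dataclass
from enum import Enum
from collections import Counter

class DefectCategory(Enum):
    MISINTERPRETATION = "misinterpretation"
    MISIDENTIFICATION = "misidentification"
    SPECIMEN_DEFECT = "specimen defect"
    REPORT_DEFECT = "report defect"

# Illustrative subtype lists condensed from the taxonomy described above.
SUBTYPES = {
    DefectCategory.MISINTERPRETATION: ["false-positive", "false-negative", "misclassification"],
    DefectCategory.MISIDENTIFICATION: ["patient", "tissue", "laterality", "anatomic site"],
    DefectCategory.SPECIMEN_DEFECT: ["lost", "inadequate volume", "measurement", "sampling", "ancillary study"],
    DefectCategory.REPORT_DEFECT: ["non-diagnostic information", "dictation/typing", "electronic/transmission"],
}

@dataclass
class AmendedReport:
    accession_number: str
    category: DefectCategory
    subtype: str

    def __post_init__(self):
        # Guard against subtypes that are not part of this taxonomy sketch.
        if self.subtype not in SUBTYPES[self.category]:
            raise ValueError(f"unknown subtype {self.subtype!r} for {self.category}")

def amendment_counts(amendments: list[AmendedReport]) -> Counter:
    """Tally amended reports by defect category for periodic review."""
    return Counter(a.category for a in amendments)

# Example: two hypothetical amendments logged during a review cycle.
log = [
    AmendedReport("S24-0001", DefectCategory.MISIDENTIFICATION, "laterality"),
    AmendedReport("S24-0002", DefectCategory.REPORT_DEFECT, "dictation/typing"),
]
print(amendment_counts(log))
```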

During specimen reception, gross examination, and processing, there are many possibilities for error, from the exchange of samples or labels and missing or excessive cuts in the block to cross-contamination of the final slide with tissue foreign to the specimen. Cognitive errors, such as inadequate or incomplete macroscopic descriptions or inadequate representation of the lesion or of the areas necessary for its characterization, may also occur; although some are beyond the pathologist’s control, the responsibility falls directly on him or her, with potentially very serious harm to the patient [8].

Morelli et al. [14] described the critical points of the pre-analytical steps in the pathology laboratory of a leading hospital in Lombardy, Italy. In this work, 8346 histological cases were reviewed, from which 19,774 samples were taken and 29,956 histological slides prepared. They identified 132 errors, distributed across accessioning (6.5%), gross dissection (28%), processing (1.5%), embedding (4.5%), tissue cutting and slide mounting (23%), staining (1.5%), and labeling and release (35%). Some very common errors were not detected in this work: specimen mismatching and sample contamination in the gross room; mismatching or loss of the specimen during embedding; loss, exhaustion, or contamination of the specimen; and damage or alteration of samples on the slides during cutting and mounting. As expected, 98.5% of the errors were due to a lack of attention, and the majority had no consequences for the patient (88%). Only 10% of the errors resulted in a delayed report to the physician. Overall, 85% of errors were detected during gross dissection, tissue cutting or slide mounting, labeling, and release, and 80% of errors could be attributed to incorrect transcription of the container identification onto slides and onto the labels applied to the slides at the time of delivery. The quality of the slides is a prime factor for a correct diagnosis; in some cases, problems in cutting, staining, or mounting can completely prevent an adequate diagnosis (Figures 1 and 2).

Figure 1.

The inappropriate cut makes it impossible to evaluate the cellularity of this biopsy (Bone marrow, H&E, 400x).

Figure 2.

The presence of folding in tissue does not allow adequate observation of the morphological characteristics (Bone marrow, H&E, 400x).

A study carried out in Pennsylvania, in a teaching hospital with a pathology residency program, identified 491 errors. Of these, 88% (n = 432) occurred in the pre-analytical phase, involving the order, identification, collection, transportation, reception, and processing of material in the laboratory. The authors also identified analytical (4%, n = 20) and post-analytical (8%, n = 39) errors [15], as shown in Table 1, together with survey data from Tosuner et al. [16].

Preanalytical phase (Notes 1, 2): 53.3% [16] to 88% [15]
  • Delivery and registration of material
  • Incomplete or erroneous order
  • Order does not correspond to the specimen
  • Sample quantity does not correspond to the order
  • Specimen without prior marking/incorrect orientation
  • Incorrect anatomical site
  • Incomplete/inaccurate clinical information
  • No material in the sample sent
  • Inappropriate packaging/fixation conditions
  • Specimen loss in the laboratory
  • Integrity not preserved
  • Malfunction of equipment
  • Frozen-section error
  • Registration error
Analytical phase: 4% [15] to 42.1% [16]
  • Quality of the slides
  • Repetition of staining
  • Foreign tissue in the specimen
  • Incorrect block identification
  • Interpretation errors
  • Delayed results
  • Work environment (e.g., refrigeration failure and other equipment failures)
Postanalytical phase: 5.6% [15] to 8% [16]
  • Errors in correlating frozen-section diagnoses with conventional histology
  • Specimen discarded during routine examination
  • Patient exchange
  • Transcription errors
  • Delayed results
  • Malfunction of laboratory information systems

Table 1.

Distribution of errors according to the phase of the operating process, with examples.

Note 1: The preanalytical phase includes accessioning, gross dissection, processing, embedding, tissue cutting, mounting, staining, labeling, and releasing slides. Some errors that occur outside the laboratory were included in this category for didactic purposes (e.g., mislabeling, loss of the specimen), because these errors may occur inside or outside the laboratory. In addition, some errors (e.g., contamination or loss of the specimen) can occur at several steps inside the laboratory, from gross dissection, embedding, or tissue cutting through slide mounting.


Note 2: Other preanalytical errors described by Morelli et al. [14] include a wrongly accessioned specimen, incorrect numbering of blocks or slides, decalcification not performed or insufficient, errors in procedure temperature, a badly positioned specimen, a number reported incorrectly on the block or slide, errors in section-thickness selection, loss or exhaustion of the specimen during cutting, and wrong staining (manual) or a wrong program choice (automatic staining).


It is important to emphasize that the risk of loss or exchange of the specimen is critical throughout the pre-analytical stages, from collection and registration through gross description and preparation of the slide. Morelli et al. [14] additionally described, in this phase, the presence of extraneous tissue (ET), a mistaken specimen, an excessive number of containers at gross dissection, the absence of decalcification when necessary, loss or exhaustion of the specimen during tissue cutting, a wrong choice of section thickness, errors in identifying the block to be cut, and others.

Some pre-analytical artifacts are caused by improper handling during the biopsy procedure. Excessive tissue trauma caused by forceps and other surgical instruments (Figures 3 and 4), as well as excessive use of electrocautery at the surgical margins, produces artifacts that may require a new biopsy.

Figure 3.

Excessive crushing at the time of biopsy collection makes it impossible to properly evaluate cellular morphology in this bone marrow (Bone marrow, 100x, H&E).

Figure 4.

In contrast, in an adequately collected sample, the cellular morphology can be clearly defined (Bone marrow, 400x, H&E).

Layfield and Anderson [17] evaluated sample labeling errors over an 18-month period in 29,479 cases associated with 109,354 blocks and 248,013 slides. In some errors, a sample was labeled with an incorrect name or identification number; in others, a specimen was incorrectly identified as to its site of origin at the time of collection. The authors identified 75 errors, of which 55 (73%) were related to the patient’s name and 18 (24%) to the anatomical site. Most of the errors (69%, n = 52) occurred in the gross examination room, 19 (25%) in the histology laboratory, and four (6%) were attributed to the pathologist. Of these errors, 73% (n = 55) resulted in slides assigned to the wrong patient. Most identification errors occurred in skin, esophagus, kidney, and colon biopsies, reflecting the distribution of case types received in surgical pathology, with many small samples from endoscopy and dermatology.

Bixenstine et al. [18] observed 69 hospitals over 3 months and described identification defects in 2.9% of cases (1780/60,501; 95% confidence interval [CI] = 2.0–4.4%), 1.2% of containers (1018/81,656; 95% CI = 0.8–2.0%), and 2.3% of requisitions (1417/61,245; 95% CI = 1.2–4.6%). Among container defects, the authors included a missing specimen, containers with missing or misplaced labels, an absent or incorrect numeric patient identifier, an absent specimen type or source, and an incorrect specimen type, source, or laterality. Requisition defects included an absent or blank requisition and missing or incorrect date, time, name, specimen source/type, laterality, or numeric identifier.

We routinely observe the widespread use of inadequate containers that are too small for the specimen and make it difficult to remove. Containers should allow the material to fit without deformity; some deformities are caused by a tight fit of the specimen in the container, which also prevents proper fixation. In addition, the container should hold a volume of fixative solution 10–20 times the volume of the specimen.

In small biopsies, the risk of specimen exchange is even more dangerous. Sometimes the histology itself raises suspicion of exogenous tissue, such as tumor cells with nuclear inclusions similar to arachnoidal cells in an endometrial sample, together with eosinophilic amorphous material morphologically similar to secretory meningioma. Some techniques, such as microsatellite PCR, can help identify mixed-up tissue specimens [19, 20].

During accessioning, many errors can occur. For example, the use of Roman numerals to label sample bottles can lead to confusion when the numbers 3 and 4 (III and IV, respectively) cannot be clearly distinguished. In other cases, leakage of formalin or another fixative solution can erase the identification on the biopsy bottle. The situation becomes more critical when there are several biopsies from different anatomic sites of the same patient; sometimes only precise information on the request form can alert the pathologist to a possible specimen mix-up. Identification within the laboratory is equally critical: even when clearly written, the numbers used for slide identification can cause confusion, for example when the lower horizontal bar of a handwritten number 2 is so short that it can be mistaken for a 7 [21].

During gross examination and the cutting or staining of slides, contaminants can arise; these are often called “floaters” by laboratory staff and are usually easy to recognize as such. However, contamination of patient samples by extraneous tissue of a similar type carries a higher risk of misinterpretation, as in cases in which malignant tissue fragments are found in biopsies from patients without malignancy. Carpenter [22] noted that the first opportunity for this error occurs during gross examination and dissection and that some specimen types are considered high risk for cross-contamination: esophageal biopsies, endocervical curettage specimens, and lymph nodes biopsied for metastatic malignancy. For example, contamination of an esophageal biopsy by a very small fragment of normal small-intestinal or colonic tissue may lead to a false-positive diagnosis of Barrett’s esophagus or, worse, contamination by a fragment of atypical or “dysplastic” intestinal epithelium may lead to a false interpretation of Barrett’s esophagus with “dysplasia.” In such cases, the productivity of the entire laboratory decreases until the pathologist discovers the source of contamination, because of the longer evaluation time and the need to cut deeper histological sections. This risk is greatest in laboratories that specialize in one area of anatomic pathology (e.g., dermatopathology, gastrointestinal pathology), because most of the specimens are of a similar type, making floaters harder to recognize. In a laboratory that evaluates prostate biopsies exclusively, a small fragment of prostate is less likely to be identified as extraneous. To reduce this risk, it is essential that the grossing station be kept clean and organized.

Tissue floaters can also be found in histology water baths and slide stainers. In a study by Platt et al. [23], extraneous tissue found in the staining baths ranged in size from two or three cells to hundreds of cells, and the principal source of contamination was the first sets of xylenes and alcohols. Of 13 water baths examined, only one tissue fragment was identified.

The largest study of extraneous tissue (ET) in surgical pathology, with data from 275 laboratories enrolled in Q-Probes, the CAP quality program, describes the frequency of ET in two review steps: a prospective and a retrospective slide review. An ET rate of 0.6% of slides (2074/321,757) was detected in the prospective review and of 2.9% of slides (1653/57,083) in the retrospective review. The presence of ET caused difficulty in reaching a diagnostic conclusion in 0.4% and 0.1% of slides in the prospective and retrospective phases, respectively [24].

Deficiencies in pre-laboratory steps can occur as well. In a study of 417 laboratories in the College of American Pathologists’ voluntary quality improvement program (Q-Probes), identification and accessioning deficiencies were found in 60,042 (6%) of a total of 1,004,115 accessioned cases (median deficiency rate of 3.4%). Specimen identification was incorrect in 9.6% of deficiencies, 77% involved discrepant or missing information, and 3.6% involved specimen handling. An absent or incomplete clinical history or diagnosis on the requisition slip represented 40% of all deficiencies. Corrections were made in 69% of cases involving specimen identification errors, in 58% of specimen handling errors, and in 27% of cases with discrepant or missing information. Lower deficiency rates were identified in laboratories with smaller numbers (<15,000) of accessioned cases and in laboratories with a formal written plan for detecting this type of error [25].

Analytical errors generally have the most evident impact on patient care, with potentially devastating consequences for the patient and for the responsible pathologist. Troxel [26] reviewed records of lawsuits for diagnostic negligence at a US company that insures 1100 pathologists. Pathology generated a low frequency of claims (8.3% per year) but a large financial impact, measured by the indemnity paid per claim, since many claims against pathologists resulted from missed diagnoses. False-negative and false-positive results for cancer accounted for 63% and 22% of claims, respectively. The highest indemnities were related to diagnostic errors in melanoma (US$757,146; 95% false negatives), cervicovaginal cytology (US$686,599; 98% false negatives), and breast cancer (US$203,192, with equal proportions of false negatives and false positives). Also with respect to analytical errors, Genta [27] argued that there are external or “suprahistological” elements that interfere with the pathologist’s decision, which can be divided into two categories: evidence-based elements (such as age, sex, ethnicity, and epidemiology) and elements arising from emotional perceptions not rooted in objective evidence, called emotional elements, which are directly related to inter- and intra-observer variability. Faced with a colon adenoma with high-grade dysplasia, for example, the pathologist may believe that surgeons will interpret the presence of dysplasia as a license for an unnecessary surgical resection and may feel inclined to omit such information from the report. Even pathologists’ own errors, when discovered, may modify their decision-making behavior. Biases such as visual anticipation, first impressions, and preconceived judgments influence critical decision-making processes [28]; however, to what extent such elements interfere with the pathologist’s diagnostic decisions remains uncertain.

It is strongly recommended that the pathologic diagnosis have the following characteristics: (1) accuracy and precision, (2) completeness of the report, and (3) timeliness. Accuracy should be based on scientifically validated gold standards, which can be difficult since most diagnoses have no such standard for morphological analysis; the pathologic diagnosis depends on interpretative and subjective skills. Precision is a measure of variation, and minimal interobserver variation is a major goal of pathologic diagnosis [29]. In a review of 344 pathology claims reported to The Doctors Company from 1995 to 1997, Troxel identified 218 claims related to surgical pathology; of these, 54% fell into six groups of specimen types or “high-risk” diagnostic areas: breast biopsy, melanoma, lymphoma, fine-needle aspiration, frozen section, and prostate biopsy. False-negative diagnoses of malignancy represented 52% of these claims and false-positive diagnoses 33% [30].

In Pakistan, Ahmad et al. [31] performed a study to describe the frequency and types of error in surgical pathology reports. After analyzing 297 reports from approximately 57,000 surgical pathology cases handled by a laboratory in Karachi in 2014, they found errors in 210 cases (0.37%). These comprised 199 formalin-fixed (permanent) specimens and 11 frozen sections, the latter representing 3.8% of a total of 2170 frozen sections. Of the 11 frozen-section errors, 10 were misinterpretations, most involving diagnoses of malignancy in the central nervous system. Of the 199 permanent specimens, 99 (49.7%) were misinterpretations, and the most common subspecialty/anatomic location was the gastrointestinal tract (including liver, pancreas, and biliary tract) with 23.2% (n = 23), followed by breast (n = 13, 13.1%) and lungs, pleura, and mediastinum (n = 10, 10.1%). Some misinterpretations resulted from a failure to perform special stains, such as a periodic acid-Schiff stain not performed in a case of nasal polyp with fungal hyphae. Other errors arose from an inadequate gross examination, in which the pathologist did not select appropriate sections for microscopy; in these cases, a lymph node involved by cancer, a gallbladder polyp, or a breast carcinoma was not described in the initial gross description. Such errors delay the delivery of results because they require re-examination of the specimen.

Delays in report release may be considered an error of the post-analytical [15] or analytical phase [16], and turnaround time (TAT) should be used as an important quality measure in laboratories [32]. It is not uncommon for the pathologist to lose sight of the fact that a patient is waiting for the result; cases should therefore not remain on the pathologist’s desk longer than necessary [33]. Delays in TAT may occur before analysis, as delays in reception, gross examination, and processing of material; during analysis, in the pathologist’s diagnostic interpretation; or after analysis, as delays in typing and releasing the report. In a study of 713 surgical pathology cases, 551 (77%) were released within 2 days and 162 (23%) in 3 days or more; the majority of delayed cases involved the lungs, gastrointestinal tract, breast, and genitourinary tract. A diagnosis of malignancy (including staging), consultation with other pathologists, frozen sections, and immunohistochemical analysis were associated with increased TAT in univariate analysis. In multivariate analysis, consultation with other pathologists, a diagnosis of malignancy, the use of immunohistochemistry, and the number of slides evaluated (11.3 when TAT > 2 days vs. 4.8 when TAT ≤ 2 days) remained significantly associated with increased TAT. Despite the CAP recommendation of an analytical turnaround of 2 days or less for most routine cases, the authors conclude that cancer care institutions may require a longer TAT than other services [34].
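Because TAT is a recommended quality indicator, a laboratory can monitor it simply by computing the interval between accession and sign-out for each case and flagging those that exceed a benchmark. The sketch below is a simplified, hypothetical illustration (calendar days rather than working days, a 2-day threshold, and invented accession numbers), not part of any study cited above.

```python
from datetime import date

# Hypothetical (accession_date, signout_date) pairs for a handful of cases.
cases = {
    "S24-0101": (date(2024, 3, 4), date(2024, 3, 5)),
    "S24-0102": (date(2024, 3, 4), date(2024, 3, 8)),
    "S24-0103": (date(2024, 3, 5), date(2024, 3, 6)),
}

TAT_TARGET_DAYS = 2  # benchmark used here purely for illustration

def turnaround_days(accessioned: date, signed_out: date) -> int:
    """Calendar days between accession and report sign-out."""
    return (signed_out - accessioned).days

# Flag cases whose turnaround exceeds the benchmark.
delayed = {acc: turnaround_days(a, s)
           for acc, (a, s) in cases.items()
           if turnaround_days(a, s) > TAT_TARGET_DAYS}

print(f"{len(delayed)}/{len(cases)} cases exceeded {TAT_TARGET_DAYS} days:", delayed)
```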

In the post-analytical phase, errors include typographical errors, which in some cases can have catastrophic consequences, for example when “cancer is present” is typed instead of “cancer is not present.” Other errors in this phase include erroneous or missing non-diagnostic information and failures of computer formatting or transmission [29]. In addition, some expressions can lead to confusing interpretations. Phrases are widely used to communicate varying degrees of diagnostic certainty, for example, “cannot rule out,” “consistent with,” “highly suspicious,” “favor,” “indefinite for,” “suggestive of,” and “worrisome for.” Lindley, Gillies, and Hassell evaluated 1500 surgical pathology reports and found such expressions in 35% of them, with wide variation in the percentage of certainty that clinicians assigned to the phrases studied. The most commonly used phrases were “consistent with” (50%) and “suggestive of” (39%). The authors believe that the reasons for using these expressions may include contradictory or low-probability staining results, inconsistent clinical data, uncertain criteria in the medical literature, limited sample quantity or extent of the abnormality, and possibly concern about the medicolegal consequences of over- or under-diagnosis.

Nakhleh and Zarbo describe the amended reports of 359 laboratories, 96% of them in the USA, that participated in the 1996 Q-Probes quality improvement program of the College of American Pathologists. A total of 3147 amended reports were identified among 1,667,547 surgical pathology specimens accessioned in the study. The median rate of amended reports was 1.5/1000 cases; of these, 19.2% were issued to correct patient identification errors, 38.7% to change the originally issued final diagnosis, 15.6% to change a preliminary written diagnosis, and 26.5% to change clinically significant information other than the diagnosis. Error detection was most commonly triggered by a clinician’s request to review a case (20.5%) [35].


4. Looking for solutions

Perkins [36] considers that the disclosure of errors in pathology is complicated by factors intrinsic to the specialty. The first barrier, as already mentioned, is the definition of error. Another concern is that the patient may not understand the nature of the error or that the clinician may be unable to explain it adequately. Even more complex is the discovery of another individual’s error: when the pathologist or the head of the laboratory discovers an error made by a technician or pathologist in the same or an external laboratory, or when the pathologist discovers an error made by a clinician in the same organization. When disclosing an error, the pathologist must therefore consider the potential impact on professional relationships. Defining an error is sometimes difficult because of the great variability of definitions used in the literature. The most commonly used classification divides errors into pre-analytical, analytical, and post-analytical phases, but errors can overlap between these categories: specimen exchange, for example, can occur in both the pre-analytical and analytical phases, and incorrectly described laterality or anatomic sites may occur at any step in the laboratory. Because of this, comparing studies in the literature can be difficult, as authors use different definitions. We summarize a risk assessment of laboratory errors in surgical pathology in a fishbone diagram (Figure 5).

Figure 5.

Risk assessment using a fishbone diagram.

One factor implicated in the increase in medical errors is the excessive decentralization of patient care. Because the patient may be attended by several professionals working in different contexts, none of whom has access to the complete information, each physician works in a situation of greater susceptibility to error [1]. The lack of complete information is critical in pathology, where many cases depend on correct, clear, and complete clinical information for adequate clinicopathological correlation. In some cases, radiological or laboratory correlation is required: in soft tissue and bone neoplasms, the pathologist must be able to interpret radiological examinations, and correlation with laboratory data is fundamental for interpreting hepatic biopsies and defining the etiology of hepatitis.

In 2016, the CAP, its Pathology and Laboratory Quality Center, and the Association of Directors of Anatomic and Surgical Pathology convened a panel of experts to develop a guideline to help define the role of case reviews in surgical pathology and cytology. The main recommendations in the document, with strong agreement among the participants, were: (1) pathologists should develop procedures for the review of selected cases in order to detect disagreements and possible interpretation errors; (2) pathologists should perform case reviews in a timely manner to avoid negative impacts on patient care; (3) pathologists should have documented case review procedures relevant to their practice and should continuously monitor and document the results of case reviews; and (4) if case reviews show poor concordance for a defined case type, pathologists should take steps to improve agreement. The situation may become somewhat more problematic where a single pathologist is responsible for all cases, since almost all published data refer to settings in which a second pathologist performs the review. The authors acknowledge that there may be added value when pathologists review their own cases at a later time; however, there are insufficient data in the literature. Each laboratory should develop written procedures and record the results of its departmental review studies. According to the authors, the causes of low agreement within and among anatomic pathology groups are multiple, but two factors deserve discussion. Some diagnoses have intrinsically greater interobserver variation, and these differences should be taken into account. Furthermore, histological diagnosis is dynamic, and different terminologies can be used for the same disease. If poor interobserver agreement is demonstrated, methods for improvement should be implemented, such as consensus conferences or comparison images; however, the quality of evidence regarding the best improvement method is very low. The authors consider that best practices may differ according to the characteristics of the disease, individual practices, and the complementary tests available [37].

Smith and Raab [9] describe how to use the Lean A3 quality control method in surgical pathology. Under the Lean method, a management philosophy developed by Toyota Motor Corp., the pathologist’s activities, that is, slide examination, diagnosis, and report preparation, are modeled as paths along the sequential flow of the sample, with connections represented by the individuals with whom the pathologist communicates. At every stage there is a possibility of error, and quality improvement should focus on repairing these failure points. The A3 method consists of defining a problem, analyzing its causes, specifying an ideal practice, and providing an improvement plan [9]. Other authors have used industrial techniques such as Six Sigma, with excellent results in error reduction [16, 38]. Examples of their measures include meetings with the clinical teams responsible for delivering material, to correct inadequate samples, and intradepartmental meetings in which employees actively participated in discussions about errors and their solutions. In the pre-analytical phase, the authors established a double-check system in which the work was divided into successive stages and, at each stage from receipt to processing, all specimens were listed and checked by two team members under the supervision of a quality control unit [16].

In a review article, Ellis and Srigley [39] emphasized the importance of structured and standardized reports for improving diagnostic quality. Standardized reports can provide data that contribute to quality improvement programs in health care and, when combined with other health data sources, offer important information for monitoring, improvement, possible interventions, and benefit analyses in services offered to the population. The standardization of reports has proved particularly important in oncological diagnoses, which can generate a great deal of information with epidemiological impact. The International Collaboration on Cancer Reporting maintains guidelines, with all the parameters required in the histopathological report, at http://www.iccr-cancer.org/datasets; these guide clinical management and provide prognostic information for several cancers, and each dataset is produced by a multidisciplinary Dataset Authoring Committee followed by a six-week public consultation. Lehr and Bosman [33], in an article about the communication skills of pathologists, discourage excessive additional notes about artifacts from improper pre-laboratory handling, such as incorrect fixation or electrocautery damage; they advise that if such problems become recurrent, a letter with guidance to the services that send the material may help to improve the specimens.

Nakhleh et al. [37] state that it is natural to want to use data from case reviews to measure the quality of a pathology laboratory; at present, however, it is not clear how best to interpret these results, and they should not be used to compare quality between different laboratories. Several limitations explain this: the sources of error, their definitions, and the methods used to measure them may differ between laboratories, and their clinical impacts may differ as well. The sensitivity of the evaluation method is uncontrolled and unknown, and expected performance benchmarks are not well defined.

Errors in anatomic pathology have been screened for by internal assessment (review of diagnoses; correlation of cytological and histological diagnoses, or of frozen-section and permanent-section diagnoses; clinicopathological conference review of incoming cases; and review of intradepartmental cases or intradepartmental consultation). External assessment can be carried out through participation in quality assurance programs or through the analysis of medicolegal claims. Some authors have recommended that two pathologists sign out every cancer diagnosis [40]; however, this requires greater manpower, a luxury that few laboratories enjoy [29].

Raab et al. [41] performed a nonconcurrent cohort study comparing the effectiveness and usefulness of error screening by a targeted 5% random review process (cases selected by the laboratory information system) and by a focused review process. The latter was performed in three subspecialties: gastrointestinal, bone and soft tissue, and genitourinary pathology. Pathologists reviewed 7444 cases in the targeted 5% random review process and 380 cases in the focused review process, detecting 195 errors (2.6% of reviewed cases) and 50 errors (13.2%), respectively (p < 0.001). The random review detected approximately four times as many errors in absolute terms, but it required examining almost 20 times as many specimens. Major errors numbered 27 (0.36%) in the random review and 12 (3.2%) in the focused review, a statistically significant difference (p < 0.001). The authors concluded that focused review detects a higher proportion of errors and may be the more effective error-screening strategy.
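As a rough illustration of the two screening strategies compared above, the sketch below draws a 5% random sample of signed-out cases for re-review and, separately, builds a focused set restricted to selected subspecialties; the case list and subspecialty labels are invented for the example.

```python
import random

# Hypothetical signed-out cases: accession number -> subspecialty.
cases = {f"S24-{i:04d}": subspecialty
         for i, subspecialty in enumerate(
             ["gastrointestinal", "genitourinary", "breast", "dermatopathology"] * 250)}

# Targeted review: a 5% random sample of all cases.
random_sample = random.sample(list(cases), k=max(1, len(cases) // 20))

# Focused review: every case from the subspecialties chosen for review.
focus = {"gastrointestinal", "genitourinary"}
focused_sample = [acc for acc, sub in cases.items() if sub in focus]

print(f"random review: {len(random_sample)} cases; focused review: {len(focused_sample)} cases")
```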

In some cases, pathologists consult extradepartmental experts to achieve better diagnostic accuracy, since diagnostic criteria are known to vary with the pathologist’s experience; for this reason, several pathologists commonly use the same expert. The principal limitation of this approach is the high selectivity of the cases: only extraordinary cases are evaluated by other pathologists, which does not capture apparently routine cases that may be false negatives [7]. Furthermore, the use of expert consultants does not remove the legal responsibility of the first pathologist. In these situations, called “vicarious liability,” the first pathologist assumes legal responsibility for having chosen a negligent consultant [30].


5. Conclusions

Errors in the pathology laboratory can result in serious, even catastrophic, adverse patient outcomes. False-negative oncologic diagnoses lead to a dangerous delay in adequate treatment; with a false-positive diagnosis, the patient may be subjected to unnecessary procedures such as extensive surgical resection, radiation therapy, or chemotherapy. It is difficult to say in which scenario the impact is greater: the delay of essential treatment or unwanted treatment of a healthy patient. In both situations the consequences can be devastating, including adverse effects or mutilation from treatment without clinical indication, possibly with fatal outcomes, as well as medical and legal consequences for the pathologist or laboratory involved in the biopsy process and serious risks to their credibility and reputation.

The aim of every pathology laboratory must be to establish procedures that optimize quality control, such as additional case reviews and review of laboratory techniques, in order to reduce interpretive errors and discrepancies in pathology reports. The training, knowledge, and experience of the pathologist are crucial for diagnostic accuracy, and the greatest investment a laboratory can make, more than in advanced technology, is in continuing medical education for these professionals.

The taboo surrounding diagnostic error in pathology should be broken: it is not possible to discuss laboratory quality control without admitting the possibility of error. Investing in continuing medical education with an emphasis on patient safety, and in the training of new pathologists with a critical view aimed at reducing errors, is an obligatory path toward improving pathology practice.


Conflict of interest

The authors declare that they do not have any conflicts of interest.

References

  1. Richardson W. To Err Is Human: Building a Safer Health System. National Academy Press; 1999
  2. Sirota RL. The Institute of Medicine’s report on medical error: Implications for pathology. Archives of Pathology & Laboratory Medicine. 2000;124(11):1674-1678
  3. Novis DA. Detecting and preventing the occurrence of errors in the practices of laboratory medicine and anatomic pathology: 15 years’ experience with the College of American Pathologists’ Q-PROBES and Q-TRACKS programs. Clinics in Laboratory Medicine. 2004;24(4):965-978
  4. Plebani M. The detection and prevention of errors in laboratory medicine. Annals of Clinical Biochemistry. 2010;47(Pt 2):101-110
  5. Dintzis SM, Stetsenko GY, Sitlani CM, Gronowski AM, Astion ML, Gallagher TH. Communicating pathology and laboratory errors: Anatomic pathologists’ and laboratory medical directors’ attitudes and experiences. American Journal of Clinical Pathology. 2011;135(5):760-765
  6. Meier FA. The landscape of error in surgical pathology. In: Nakhleh RE, editor. Error Reduction and Prevention in Surgical Pathology. Springer; 2015. pp. 3-29
  7. Zarbo RJ, Meier FA, Raab SS. Error detection in anatomic pathology. Archives of Pathology & Laboratory Medicine. 2005;129(10):1237-1245
  8. Sirota RL. Defining error in anatomic pathology. Archives of Pathology & Laboratory Medicine. 2006;130(5):604-606
  9. Smith ML, Raab SS. Directed peer review in surgical pathology. Advances in Anatomic Pathology. 2012;19(5):331-337
  10. Meier FA, Varney RC, Zarbo RJ. Study of amended reports to evaluate and improve surgical pathology processes. Advances in Anatomic Pathology. 2011;18(5):406-413
  11. Volmar KE, Idowu MO, Hunt JL, Souers RJ, Meier FA, Nakhleh RE. Surgical pathology report defects: A College of American Pathologists Q-Probes study of 73 institutions. Archives of Pathology & Laboratory Medicine. 2014;138(5):602-612
  12. Valenstein PN, Sirota RL. Identification errors in pathology and laboratory medicine. Clinics in Laboratory Medicine. 2004;24(4):979-996
  13. Meier FA, Zarbo RJ, Varney RC, Bonsal M, Schultz DS, Vrbin CM, et al. Amended reports: Development and validation of a taxonomy of defects. American Journal of Clinical Pathology. 2008;130(2):238-246
  14. Morelli P, Porazzi E, Ruspini M, Restelli U, Banfi G. Analysis of errors in histology by root cause analysis: A pilot study. Journal of Preventive Medicine and Hygiene. 2013;54(2):90-96
  15. Samulski TD, Montone K, LiVolsi V, Patel K, Baloch Z. Patient safety curriculum for anatomic pathology trainees: Recommendations based on institutional experience. Advances in Anatomic Pathology. 2016;23(2):112-117
  16. Tosuner Z, Gucin Z, Kiran T, Buyukpinarbasili N, Turna S, Taskiran O, et al. A Six Sigma trial for reduction of error rates in the pathology laboratory. Turkish Journal of Pathology. 2016:171-177
  17. Layfield LJ, Anderson GM. Specimen labeling errors in surgical pathology: An 18-month experience. American Journal of Clinical Pathology. 2010;134(3):466-470
  18. Bixenstine PJ, Zarbo RJ, Holzmueller CG, Yenokyan G, Robinson R, Hudson DW, et al. Developing and pilot testing practical measures of preanalytic surgical specimen identification defects. American Journal of Medical Quality. 2013;28:308-314
  19. Gras E. Application of microsatellite PCR techniques in the identification of mixed up tissue specimens in surgical pathology. Journal of Clinical Pathology. 2000;53(3):238-240
  20. Burke NG, McCaffrey D, Mackle E. Contamination of histology biopsy specimen—A potential source of error for surgeons: A case report. Cases Journal. 2009;2(1):7619
  21. Weyers W. Confusion—Specimen mix-up in dermatopathology and measures to prevent and detect it. Dermatology Practical & Conceptual. 2014;4(1):27-42
  22. Carpenter J. Risk of Misdiagnosis Due to Tissue Contamination May Be Higher for Certain Specimen Types. DARK Daily Laboratory and Pathology News; 2011
  23. Platt E, Sommer P, McDonald L, Bennett A, Hunt J. Tissue floaters and contaminants in the histology laboratory. Archives of Pathology & Laboratory Medicine. 2009;133(6):973-978
  24. Gephardt GN, Zarbo RJ. Extraneous tissue in surgical pathology. Archives of Pathology & Laboratory Medicine. 1996;120:1009-1014
  25. Nakhleh RE, Zarbo RJ. Surgical pathology specimen identification and accessioning: A College of American Pathologists Q-Probes study of 1,004,115 cases from 417 institutions. Archives of Pathology & Laboratory Medicine. 1996;120(3):227-233
  26. Troxel DB. An insurer’s perspective on error and loss in pathology. Archives of Pathology & Laboratory Medicine. 2005;129(10):1234-1236
  27. Genta RM. Same specimen, different diagnoses: Suprahistologic elements in observer variability. Advances in Anatomic Pathology. 2014;21(3):188-190
  28. McLendon RE. Errors in surgical neuropathology and the influence of cognitive biases: The psychology of intelligence analysis. Archives of Pathology & Laboratory Medicine. 2006;130(5):613-616
  29. Leong ASY, Braye S, Bhagwandeen B. Diagnostic “errors” in anatomical pathology: Relevance to Australian laboratories. Pathology. 2006;38(6):490-497
  30. Troxel DB. Diagnostic errors in surgical pathology uncovered by a review of malpractice claims. Part I. General considerations. International Journal of Surgical Pathology. 2000;8(2):161-163
  31. Ahmad Z, Idrees R, Uddin N, Ahmed A, Fatima S. Errors in surgical pathology reports: A study from a major center in Pakistan. Asian Pacific Journal of Cancer Prevention. 2016;17(4):1869-1874
  32. Raab SS, Grzybicki DM. Measuring quality in anatomic pathology. Clinics in Laboratory Medicine. 2008;28(2):245-259
  33. Lehr HA, Bosman FT. Communication skills in diagnostic pathology. Virchows Archiv. 2016;468(1):61-67
  34. Patel S, Smith JB, Kurbatova E, Guarner J. Factors that impact turnaround time of surgical pathology specimens in an academic institution. Human Pathology. 2012;43(9):1501-1505
  35. Nakhleh RE, Zarbo RJ. Amended reports in surgical pathology and implications for diagnostic error detection and avoidance: A College of American Pathologists Q-Probes study of 1,667,547 accessioned cases in 359 laboratories. Archives of Pathology & Laboratory Medicine. 1998;122(4):303-309
  36. Perkins IU. Error disclosure in pathology and laboratory medicine: A review of the literature. AMA Journal of Ethics. 2016;18(8):809-816
  37. Nakhleh RE, Nosé V, Colasacco C, Fatheree LA, Lillemoe TJ, McCrory DC, et al. Interpretive diagnostic error reduction in surgical pathology and cytology: Guideline from the College of American Pathologists Pathology and Laboratory Quality Center and the Association of Directors of Anatomic and Surgical Pathology. Archives of Pathology & Laboratory Medicine. 2016;140(1):29-40
  38. Vanker N, van Wyk J, Zemlin AE, Erasmus RT. A Six Sigma approach to the rate and clinical effect of registration errors in a laboratory. Journal of Clinical Pathology. 2010;63:434-437
  39. Ellis DW, Srigley J. Does standardised structured reporting contribute to quality in diagnostic pathology? The importance of evidence-based datasets. Virchows Archiv. 2016;468(1):51-59
  40. Safrin R, Bark C. Surgical pathology sign-out: Routine review of every case by a second pathologist. The American Journal of Surgical Pathology. 1993;17:1190-1192
  41. Raab SS, Grzybicki DM, Mahood LK, Parwani AV, Kuan SF, Rao UN. Effectiveness of random and focused review in detecting surgical pathology error. American Journal of Clinical Pathology. 2008;130(6):905-912
