Open access

Limit of Detection and Limit of Quantification Determination in Gas Chromatography

Written By

Ernesto Bernal

Submitted: 13 September 2013 Published: 26 February 2014

DOI: 10.5772/57341

From the Edited Volume

Advances in Gas Chromatography

Edited by Xinghua Guo


1. Introduction

Every year, millions of analyses of every kind are performed around the world, and millions of decisions are made on the basis of those analyses. Do these medicines contain the amount of drug stated on their containers? Can we safely consume this water or this food? Are these alloys suitable for use in aircraft construction? Was the driver drunk when he crashed? Is this athlete using drugs to enhance his performance, and if he is punished, can we be certain he was using those substances, or are we being unfair to him? All these questions are answered with the help of a chemical analysis, and all have consequences in real life (compensation claims, disease, fines, even prison). Virtually every aspect of society is supported in some way by analytical measurement; consequently, these analyses need to be reliable.

Until the 1970s, the underlying assumption was that the reports submitted by laboratories accurately described study conduct and precisely reported the study data. Suspicion about this assumption was raised during the review of some studies, when data inconsistencies and evidence of unacceptable laboratory practices came to light [1]. If the result of a test cannot be trusted, then it has little value and the test might as well not have been carried out. When a client commissions analytical work from a laboratory, it is assumed that the laboratory has a degree of expertise that the client does not have; the client expects to be able to trust the results reported. Thus, the laboratory and its staff have a clear responsibility to justify the client's trust by providing the right answer to the analytical part of the problem, in other words, results that have demonstrable "fitness for purpose" [2]. Implicit in this is that the tests carried out are appropriate for the analytical part of the problem that the client wishes solved, and that the final report presents the analytical data in such a way that the client can readily understand it and draw appropriate conclusions. Method validation enables chemists to demonstrate that a method is "fit for purpose" [1]. For an analytical result to be fit for its intended purpose, it must be sufficiently reliable that any decision based on it can be taken with confidence. Thus the method performance must be validated and the uncertainty of the result, at a given level of confidence, estimated. Uncertainty should be evaluated and quoted in a way that is widely recognized, internally consistent and easy to interpret. Most of the information required to evaluate uncertainty can be obtained during validation of the method [1].

Since then, several agencies, such as the United States Food and Drug Administration (FDA) [3-5], the International Conference for Harmonization (ICH) [6], the United States Pharmacopeia (USP) [7] and the International Standards Organization (ISO/IEC) [8], have created working groups to ensure the validity and reliability of studies. They would eventually publish standards for measuring the performance of laboratories and enforcement policies. Good laboratory practice (GLP) regulations were finally proposed in 1976, with method validation being an important part of GLP.

1.1. Method validation

ISO [9] defines validation as the confirmation, via the provision of objective evidence, that the requirements for a specific intended use or application have been met; method validation is therefore the process of defining an analytical requirement and confirming that the method under consideration has performance capabilities consistent with what the application requires [2].

Therefore, method validation should be an essential component of the measurements that a laboratory makes to allow it to produce reliable analytical data; in consequence, method validation should be an important part of the practice of all chemists around the world. Nevertheless, the knowledge of exactly what needs to be done to validate a method seems to be poor amongst analytical chemists. The origin of the problem is the fact that many of the technical terms used in processes for evaluating methods vary between different sectors of analytical measurement, both in their meaning and in the way they are determined [2].

In order to resolve this problem, several protocols and guidelines [2, 7, 10-19] on method validation have been prepared. Method validation consists of a series of tests that both prove any assumptions on which the analytical method is based and establish and document the performance characteristics of a method, demonstrating whether the method is fit for a particular analytical purpose [15]. Typical performance characteristics of analytical methods are applicability, selectivity or specificity, calibration, accuracy and recovery, precision, range, limit of quantification, limit of detection, sensitivity and ruggedness or robustness [2, 7, 10-19]. Unfortunately, some of the definitions vary between the different organizations, causing confusion among analysts. Some parameters are summarized in Table 1.

It is not the purpose of this work to define these parameters, the procedures used to determine them, or the strategies to perform a validation; readers concerned with these issues are invited to consult the following references [2, 7, 10-19].

Parameter | Guideline(s)
Specificity | USP, ICH
Selectivity | ISO 17025
Precision | USP, ICH
• Repeatability | ICH, ISO 17025
• Intermediate precision | ICH
• Reproducibility | ICH
Accuracy | USP, ICH, ISO 17025
Linearity | USP, ICH, ISO 17025
Range | USP, ICH
Limit of detection | USP, ICH, ISO 17025
Limit of quantification | USP, ICH, ISO 17025
Robustness | USP, ISO 17025
Ruggedness | USP

Table 1.

Parameters for method validation

1.2. Limit of detection

From the previous section, it is clear that despite the efforts to standardize concepts, there is still confusion about some terms in method validation, such as selectivity and specificity, ruggedness and reproducibility, and accuracy and trueness. Nevertheless, the most troublesome concept of all in method validation is the limit of detection (LOD). The LOD remains an ambiguous quantity in analytical chemistry in general and gas chromatography in particular; LODs differing by orders of magnitude are frequently reported for very similar chemical measurement processes (CMPs). Such discrepancies raise questions about the validity of the concept of the LOD.

The limit of detection is the smallest amount or concentration of analyte in the test sample that can be reliably distinguished from zero [20]. Despite the simplicity of the concept, the whole subject of the LOD is fraught with problems, which translate into the observed discrepancies in its calculation. Some of the problems are [15]:

  • There are several conceptual approaches to the subject, each providing a somewhat different definition of the limit; consequently, the methodologies used to calculate the LOD derived from these definitions differ from one another.

  • LOD is confused with other concepts like sensitivity.

  • Estimates of LOD are subject to quite large random variation.

  • Statistical determinations of LOD assume normality, which is at least questionable at low concentrations.

  • The LOD, which characterizes the whole chemical measurement process (CMP), is confused with concepts that characterize only one aspect of the CMP, the detection step.

These problems are more prominent in the field of chromatography where, besides the previous issues, no standard model for the LOD has ever been proposed by any recognized organization. In fact, the International Union of Pure and Applied Chemistry (IUPAC) model for LOD determination was devised specifically for spectrochemical analysis. Thus, chromatographic conditions are usually not taken into consideration when determining the LOD [15, 21].

The main purpose of this chapter is to bring some light to these problems. To achieve this goal, the different problems behind the LOD and the limit of quantification (LOQ) are discussed. The different definitions of and conceptual approaches to the LOD and LOQ given by different organizations [2, 7, 20, 22], the different models used to calculate them, and the effects of the matrix and of particularities of chromatographic techniques on their calculation are critically reviewed [23-25], with the aim of unifying criteria and making the LOD and LOQ more reliable figures of merit in chromatography.


2. Limit of detection and limit of quantification

2.1. Definitions

Since the seminal work of Currie [26], emphasis has been placed on the negative effect of the large number of terms that have been used over the years to describe the detection capabilities of a method (Table 2). A wide range of terminologies and multiple mathematical expressions have been used to define the limit of detection concept. These different terms resulted in different ways of calculating the LOD, leading to numerical values that can span three orders of magnitude when applied to the same measurement process. Some mathematical definitions involved the standard deviation of the blank, some the standard deviation of the net signal, some authors used two-sided confidence intervals while others used one-sided intervals, and some authors even used non-statistical definitions; what was missing from these authors was a theoretical basis for the concept leading to an operational definition of the term.

To overcome this problem, ISO and IUPAC developed documents bringing their nomenclature into essential agreement [15, 20, 22]. As the measure of the detection capability of a CMP, IUPAC recommends the term Minimum Detectable Value (LD) of the appropriate chemical variable, or Detection Limit, and defines it as the smallest amount or concentration of analyte in the test sample that can be reliably distinguished from zero. As the measure of the quantification capability of the CMP, IUPAC recommends the term Minimum Quantifiable Value (LQ), or Quantification Limit, which is the concentration or amount below which the analytical method cannot operate with acceptable precision. The ISO definition puts the emphasis on the statistics; ISO [2, 22] defines the minimum detectable net concentration as the true net concentration or amount of the analyte in the material to be analyzed which will lead, with probability (1-β), to the conclusion that the concentration of the analyte in the analyzed material is larger than that of the blank matrix.

In addition, several organizations have introduced terms with meanings similar to the LOD. The US Environmental Protection Agency (EPA) uses the term method detection limit (MDL), the minimum concentration of an analyte that can be identified, measured and reported with 99% confidence that the analyte concentration is greater than zero.

On the other hand, the ICH defines the LOD as the lowest amount of analyte in a sample which can be detected, but not necessarily quantitated as an exact value, and the LOQ of an individual analytical procedure as the lowest amount of analyte in a sample which can be quantitatively determined with suitable precision and accuracy [19, 27].

The American Chemical Society (ACS) Committee on Environmental Improvement defines the LOD as the lowest concentration of an analyte that the analytical process can reliably detect [27].

The Center for Drug Evaluation and Research (CDER), in its document entitled 'Validation of Chromatographic Methods', defines the LOD as the lowest concentration of analyte in a sample that can be detected, but not necessarily quantitated, under the stated experimental conditions [19, 27].

The National Association of Testing Authorities (NATA) defines the LOD as the smallest amount or concentration that can be readily distinguished from zero and positively identified according to predetermined criteria and/or a given level of confidence, while the limit of quantification is the lowest concentration of an analyte that can be determined with acceptable precision (repeatability) and accuracy under the stated conditions of the test [2, 27].

The AOAC, for its part, defines the limit of detection as the lowest content that can be measured with reasonable statistical certainty, and the limit of quantification as the content equal to or greater than the lowest concentration point on the calibration curve [2].

Finally, the USP [7] defines LOD as: the lowest amount of analyte that can be detected but not necessarily quantitated under the stated experimental conditions. Table 2 summarizes these terms, symbols and statistical items reported in the literature [28].

All these definitions include terms such as reliability, probability and confidence, which implies the use of statistics to calculate the limits. Some definitions state explicitly the required degree of reliability and consequently fix it; the others leave that decision to the operator. Some definitions make it clear that they refer to a CMP and not only to the detection phase of the analysis. Therefore, these terms should not be confused with terms that refer only to detection, such as the instrumental detection limit (IDL) used by the EPA. The limit of detection is a parameter set before the measurement, in other words, it is defined a priori; therefore, it is not related to the decision of whether a particular measurement detects anything or not. Finally, these terms should not be confused with sensitivity, defined by IUPAC as the slope of the calibration curve.

As an example of the wide range of terms currently used to describe the detection capabilities of a method, consider a study performed in 2002 [29] by the American Petroleum Institute (API), which reviewed policies related to analytical detection and quantification limits in ten states of the USA, with particular focus on water quality and wastewater issues in permitting and compliance; these regulations are expected to follow the EPA recommendations. It was found that every state incorporates detection or quantification terms in its regulations to some extent. The terms referenced are usually, but not always, defined in the regulations. The most frequently used terms are detection limit/level, method detection limit (MDL), limit of detection (LOD), and practical quantitation level (PQL). Minimum level (ML) is the term used by the EPA instead of LOQ; it is defined as the concentration at which the entire analytical system must give a recognizable signal and acceptable calibration point. The ML is the concentration in a sample that is equivalent to the concentration of the lowest calibration standard analyzed by a specific analytical procedure, assuming that all of the method-specified sample weights, volumes, and processing steps have been followed. The EPA uses other terms such as the interim minimum level (IML), created to describe MLs that are based on 3.18 times the MDL, to distinguish them from MLs that have been promulgated. The EPA defines the PQL as the lowest level that can be reliably achieved within specified limits of precision and accuracy during routine laboratory operating conditions. Another term used by the EPA is the alternative minimum level (AML), which can account for interlaboratory variability and sample matrix effects, and finally the interlaboratory quantification estimate (IQE), a term developed by the American Society for Testing and Materials (ASTM), which is similar to the AML. Table 3 lists the detection and quantification terms used by the ten states.

Term | Signal domain | Concentration domain | Reference
Limit of detection | LOD | LOD | IUPAC, 1976
 | | LOD | ACS, 1980
Critical level | LC | xC | Oppenheimer et al., 1983
 | yC | xC | Hubaux & Vos, 1970
Critical value | LC | xC | Currie, 1995
Method detection limit | | MDL | EPA
Limit of guarantee of purity | xG | cG | Kaiser, 1966
Limit of identification | xI | cI | Long & Winefordner, 1983
Detection level | LD | xD | Oppenheimer et al., 1983
Detection limit | yD | xD | Hubaux & Vos, 1970
Minimum detectable value | LD | xD | Currie, 1995
Limit of quantification | | LOQ | ACS, 1980
 | | LOQ | Long & Winefordner, 1983
Determination limit | LQ | xQ | Oppenheimer et al., 1983
Minimum quantifiable value | LQ | xQ | Currie, 1995
Minimum level | | ML | EPA
 | | PQL, IML | EPA

Table 2.

Terms and symbols reported in the literature related to method detection capabilities

Therefore, despite the efforts of various agencies to standardize the terms concerning the detection capability of a measurement process, there are still differences between the various regulations.

2.2. Theory

2.2.1. Hypothesis testing approach

In 1968, Currie published the hypothesis testing approach to detection decisions and limits in chemistry [26]. This approach has gradually been accepted as the detection limit theory.

Currie's achievement was in recognizing that there are two different questions under consideration when measurements are performed on a specimen under test, and that these two questions have different answers. The first question, "does the measurement result indicate detection or not?", is answered by performing measurements on the specimen under test and then comparing an appropriate computed measure with a critical decision level. The second question is "what is the lowest analyte content that will reliably indicate detection?", and the answer is defined as the detection limit.

State programs DL IDL IML LOD LOQ MDL ML QL PQL Other§
Alabama
Groundwater soil/hazardous waste x x x
Water quality x x x x
California
Drinking water x x DLR
Groundwater soil/hazardous waste x x x x x
Health/Safety x
Water quality x x x x
Illinois
Drinking water x x x MQL, ADL
Groundwater soil/hazardous waste x x x x EDL, EQL, MQL
Laboratories x
Pesticides x x MQL
Water quality x x x x x x
Louisiana
Air x x x
Groundwater soil/hazardous waste x x x x
Water quality x LOM, MQL**
New Jersey
Drinking water x
Groundwater soil/hazardous waste x x x x x x CRDL
Laboratories x x x
Water quality x x x NJQL
LOD*
Ohio
Air x
Drinking water x x
Groundwater solid/hazardous waste x x x ADL
Laboratories X
Underground storage tanks X X
Water quality X X X X ADL
Oklahoma
Air X ADL, MLD, MQL*
Drinking water MDL*
Groundwater solid/hazardous waste X X X
Water quality X MQL*
Pennsylvania
Air
Drinking water X X
Groundwater solid/hazardous waste X X X EQL
Water quality X X X X
Texas
Air X
Drinking water X X x
Groundwater solid/hazardous waste X X X x MQL, SQL
Water quality X MAL
Washington
Air X
Drinking water X
Groundwater solid/hazardous waste X X X X X
Water quality X X X

Table 3.

Summary of detection and quantification terms used by the states

§ Definitions of these terms are given in the Nomenclature section


According to Currie, IUPAC and ISO [20, 30-31], detection limits (minimum detectable amounts) are based on the theory of hypothesis testing and the probabilities of false positives, α, and false negatives, β. Quantification limits, on the other hand, are defined in terms of a specified value of the relative standard deviation. It is important to emphasize that both types of limits are CMP performance characteristics, associated with underlying true values of the quantity of interest; they are not associated with any particular outcome or result. The detection decision, in contrast, is result-specific; it is made by comparing the experimental result with the critical value, which is the minimum significantly estimated value of the quantity of interest. In other words, the critical value is used to make an a posteriori decision about a particular result, while the limit of detection is an a priori estimate of the detection capability of the measurement process.

In order to explain the detection limit theory, let us assume that we have an analytical method with known precision at all concentration levels and whose results follow a normal distribution. If we test a large number of blanks with this method, a distribution such as that in Figure 1 is obtained.

Figure 1.

Value distribution around zero and the critical level.

The blank values will be distributed around zero with a standard deviation σ0. In other words, if we measure a blank, we can obtain a result different from zero because of the experimental errors of the measurement process. We therefore need to establish a point that differentiates blank measurements from non-blank measurements. That point is the critical value LC, which allows us to decide whether a signal corresponds to a blank or to an analyte, i.e. to make an a posteriori decision. Nevertheless, at the critical level there is a probability (blue shadow) that a blank will give a signal above LC, so we may erroneously conclude that analyte is present when it is not. That probability, α, is called the type I error, or a false positive. The value of α is chosen by the analyst as a function of the risk of being wrong that one is willing to take. The hypothesis testing theory uses the following definition:

Pr(L > LC | L = 0) ≤ α    (1)

where L is used as the generic symbol for the quantity of interest; it is replaced by S when treating net analyte signals and by x when treating analyte concentrations or amounts. Mathematically, the critical level is given as:

LC = Kα σ0    (2)

where Kα corresponds to the one-sided tail of the blank distribution at probability level 1-α.

Nevertheless, the critical level LC cannot be used as the limit of detection because, if we measure a series of samples with an amount of analyte exactly equal to LC, the results will follow, like the blanks, a normal distribution, but centred on the LC value (Figure 2). Half of the results will be above LC, leading us to conclude that the signal is from the analyte, and half will be below LC, leading us to conclude that the sample is a blank. Therefore, if we set LC as the limit of detection, half of the results would be reported erroneously: there is a 50% probability of reporting that the analyte is not present in the sample when it actually is. This probability, β, is called the type II error, or a false negative.

Therefore, if a laboratory cannot accept a 50% error rate at the limit of detection, the only way to reduce the probability of a false negative is to set the limit of detection at a higher concentration (Figure 3).

Figure 2.

Critical level and limit of detection with probability α. The risk of a false negative is 50%.
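As a numerical illustration of this argument, the following minimal sketch (a hypothetical example, not part of the original treatment) simulates Gaussian, homoscedastic signals with σ0 = 1 and counts the resulting error rates.

```python
# Minimal Monte Carlo sketch of the false positive / false negative argument.
# Assumptions: Gaussian, homoscedastic signals; sigma0 = 1 is an arbitrary choice.
import numpy as np

rng = np.random.default_rng(0)
sigma0, n = 1.0, 100_000
L_C = 1.645 * sigma0                                   # critical level (alpha = 0.05)

blanks = rng.normal(0.0, sigma0, n)                    # true content = 0
print("false positive rate:", np.mean(blanks > L_C))   # ~0.05 = alpha

at_LC = rng.normal(L_C, sigma0, n)                     # true content exactly at L_C
print("false negatives if LD = LC:", np.mean(at_LC <= L_C))   # ~0.50

L_D = 2 * L_C                                          # detection limit with alpha = beta = 0.05
at_LD = rng.normal(L_D, sigma0, n)                     # true content at L_D
print("false negatives at LD:", np.mean(at_LD <= L_C))        # ~0.05 = beta
```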

Once LC has been defined, the a priori limit of detection, LD, may be established by specifying LC, the acceptable level β of the type II error, and the standard deviation σD, which characterizes the probability distribution of the signal when its true value is equal to LD [26]. From the hypothesis testing theory, we obtain the following relation:

Pr(L ≤ LC | L = LD) = β    (3)

Mathematically, the limit of detection is given as:

LD = Kα σ0 + Kβ σD    (4)

If equation 2 is substituted in equation 4, we obtain:

LD = LC + Kβ σD    (5)

where Kβ corresponds to the one-sided tail of the distribution of the signal at the limit of detection, at probability level 1-β. Finally, the defining relation for the limit of quantification (LQ) is:

LQ = KQ σQ    (6)

Where KQ=1/RSDQ, and σQ equals the standard deviation of L when L= LQ.

Figure 3.

Critical level and limit of detection with α=β.

Summarizing, the levels LC, LD and LQ, are determined by the error structure of the measurement process, the risks α and β and the maximum acceptable relative standard deviation for quantitative analysis. LC is used to test an experimental result, whereas LD and LQ refer to the capabilities of the measurement process itself. The relations among the three levels and their significance in analysis are shown in Figure 4.

Figure 4.

The three principal analytical regions [26].

α, β and KQ can be chosen by the analyst according to the detection and quantification needs. Of particular interest is the case where α = β and σ is constant; in those circumstances Kα = Kβ = K and σD = σ0:

LC = K σ0    (7)
LD = LC + K σD = K(σ0 + σD)    (8)

And because σ is constant, σ0 = σD = σ, so relation 8 can be written as:

LD = 2Kσ = 2LC    (9)

In this particular case, the limit of detection is twice the critical level. The values of α and β recommended by the IUPAC and ISO documents are 0.05 each, which gives K = 1.645; and since σ is constant, either σ0 or σD can be used, and because LD is not known, the value of σ0 is used in equation 9. Under these circumstances (normal distribution, constant standard deviation and α = β = 0.05) the following expressions are obtained [20, 26, 30-31]:

LC = Kσ0 = 1.645 σ0    (10)

If σ0 is estimated by s0, based on ν degrees of freedom, K can be replaced by Student’s t, resulting in equation:

LC = t1-α,ν s0    (11)

For α=0.05 and four degrees of freedom, equation 12 is obtained.

LC = 2.132 s0    (12)

Following the procedure developed in the previous paragraphs, equivalent equations are obtained for the limit of detection.

LD = LC + KσD = 3.29 σ0    (13)

If LC employs an estimate s0, based on ν degrees of freedom, then K can be replaced by δα,β,ν, the non-centrality parameter of the non-central t distribution. If α = 0.05 and four degrees of freedom are used, equation 14 is obtained:

LD = δα,β,ν σ0 ≈ 2 t1-α,ν σ0 = 4.26 σ0    (14)

The ability to quantify is expressed in terms of the signal or analyte that will produce estimates having a specified relative standard deviation (RSD), commonly 10%. That is

LQ = KQ σQ = 10 σ0    (15)

where LQ is the limit of quantification, σQ is the standard deviation at the limit of quantification, and KQ is the multiplier whose reciprocal equals the selected RSD; the IUPAC default value is 10. All these expressions can be transformed from the signal domain to the concentration domain, and vice versa, through the slope of the calibration curve.
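As an illustration, the following minimal sketch applies equations 10 to 15 to a set of replicate blank signals and converts the results to the concentration domain. The blank values and the calibration slope are hypothetical, and the simplifying assumptions above (normality, homoscedasticity, α = β = 0.05) are taken for granted.

```python
# Minimal sketch of Currie's simplified relations (equations 10-15).
# The blank signals and the slope m below are hypothetical example values.
import numpy as np
from scipy import stats

blank_signals = np.array([0.8, 1.2, 0.9, 1.1, 1.0, 0.7, 1.3, 1.0, 0.9, 1.1])  # e.g. blank peak areas
s0 = blank_signals.std(ddof=1)        # estimate of sigma_0
nu = len(blank_signals) - 1           # degrees of freedom

K = stats.norm.ppf(0.95)              # 1.645 for alpha = beta = 0.05
L_C = K * s0                          # critical level (equation 10)
L_D = 3.29 * s0                       # detection limit (equation 13)
L_Q = 10.0 * s0                       # quantification limit (equation 15)

t = stats.t.ppf(0.95, nu)             # Student's t replaces K when sigma_0 is estimated
L_C_t = t * s0                        # equation 11
L_D_t = 2 * t * s0                    # approximation to the non-central t expression (equation 14)

m = 250.0                             # hypothetical calibration slope (signal units per ng/mL)
x_D, x_Q = L_D / m, L_Q / m           # transfer to the concentration domain via the slope
print(f"LC={L_C:.2f}  LD={L_D:.2f}  LQ={L_Q:.2f} (signal);  xD={x_D:.4f}  xQ={x_Q:.4f} ng/mL")
```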

The above relations represent the simplest possible case and are based on restrictive assumptions; some of these, such as the normality of the blank measurements and the constancy of σ over the region of the critical level and the limit of detection, are questionable. They must not be taken as the defining relations for detection and quantification capabilities; the defining relations are equations 1, 3 and 6 for the critical level, the limit of detection and the limit of quantification, respectively. Finally, for chemical measurements at least, the fundamental factor contributing to the detection and quantification performance characteristics is the variability of the blank.

Currie's hypothesis testing scheme, in spite of being theoretically solid, is very broad in scope, being independent of the noise, the measurement methodology, the conditions, etc. In fact, his scheme does not even have a connection to calibration curve methodology or to the substance being measured.

A particular problem for the calculation of the limit of detection in gas chromatography is the estimation of the standard deviation of the blank. It has been suggested to measure 20 blanks and calculate their standard deviation; other authors suggest measuring the noise of the baseline of one chromatogram in a region near the analyte peak [21]. Nevertheless, questions arise about the integration parameter settings, the region of the baseline that should be used to calculate the blank variability, and the presence of interfering substances. This makes the determination of s0 subjective and highly variable and is a major drawback of using the IUPAC definition in dynamic systems such as chromatography [23]. In order to introduce the calibration curve into the calculation of the limits, other approaches have been developed to estimate the detection capabilities of a CMP.

2.2.2. Hubaux-Vos approach

In 1970, Hubaux and Vos suggested how Currie's scheme could be implemented with calibration curve methodology for a CMP with homoscedastic (σ = constant) Gaussian noise, ordinary least squares processing of the calibration curve data and ordinary least squares prediction intervals [32]. Since then, the Hubaux and Vos treatment has generally been assumed to be fundamentally correct.

Hubaux and Vos made a series of assumptions to develop their approach: the standards of the calibration curve are independent, the standard deviation is constant throughout the calibration curve, the contents of the standards are accurately known and, above all, the signals of all the points in the calibration curve follow a Gaussian distribution (Figure 5).

Hubaux and Vos then drew two confidence limits on either side of the regression line with an a priori level of confidence 1-α-β (Figure 5). The regression line and its two confidence limits can be used to predict, with probability 1-α-β, likely values of the signal. The confidence band can also be used in reverse: for a measured signal y of a sample of unknown content, it is possible to predict the range of its content (xmax-xmin) (Figure 5).

For our purposes, the signal yC, for which the lower limit of content is zero, is of interest (Figure 5). Signals equal to or lower than yC have a probability greater than α of being due to a blank and hence cannot be distinguished from a blank signal. yC is therefore the lowest measurable signal and corresponds to the critical level LC of Currie; more exactly, yC is an estimate of LC. Because this limit concerns signals, it is used for a posteriori decisions.

If we trace a line from yC to the lower confidence limit and then down to the x axis (Figure 5), the value xD is obtained, which is the lowest content that can be distinguished from zero. This value is inherent to the CMP and can be used as an a priori limit; it is thus equivalent to the limit of detection LD of Currie. It is important to realize that the regression line and its confidence limits are estimates of the real values; consequently, the values of yC and xD are estimates too [32].

Figure 5.

The linear calibration line, with its upper and lower confidence limits. yC is the decision limit and xD the detection limit [33].

One serious problem with the Hubaux-Vos approach is the non-constant width of the prediction interval, which contradicts the assumption of homoscedasticity; another is that, because yC and xD are estimates, the method requires the generation of multiple calibration curves so that mean values of yC and xD can be calculated.
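A minimal sketch of the Hubaux-Vos construction is shown below; the calibration data are hypothetical, ordinary least squares and homoscedastic Gaussian noise are assumed, and the root-finding step simply locates the content whose lower prediction band reaches yC.

```python
# Minimal sketch of the Hubaux-Vos decision (yC) and detection (xD) limits.
# Assumptions: homoscedastic Gaussian noise, OLS fit; the data below are hypothetical.
import numpy as np
from scipy import stats, optimize

x = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 6.0, 8.0, 10.0])                  # standard contents (ng/mL)
y = np.array([1.0, 120.0, 260.0, 505.0, 990.0, 1520.0, 2010.0, 2485.0])  # measured signals

n = len(x)
m, i = np.polyfit(x, y, 1)                            # slope and intercept
s = np.sqrt(np.sum((y - (i + m * x))**2) / (n - 2))   # residual standard deviation
x_bar, Sxx = x.mean(), np.sum((x - x.mean())**2)
t = stats.t.ppf(0.95, n - 2)                          # one-sided, alpha = beta = 0.05

def half_width(xi):                                   # half-width of the OLS prediction band at xi
    return t * s * np.sqrt(1 + 1/n + (xi - x_bar)**2 / Sxx)

y_C = i + half_width(0.0)                             # decision limit: upper band at x = 0
# Detection limit: content whose lower prediction band just reaches y_C
x_D = optimize.brentq(lambda xi: (i + m * xi - half_width(xi)) - y_C, 0.0, x.max())
print(f"yC = {y_C:.1f} (signal units),  xD = {x_D:.3f} ng/mL")
```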

2.2.3. Propagation of errors approach

In the hypothesis testing approach, the value of the limit of detection LD depends only on the variability of the blank, σ0. The propagation of errors approach considers instead the standard deviation of the concentration, sx. This value is calculated by including the standard deviations of the blank (s0), slope (sm) and intercept (si) in the equation [25]. The contribution of the variability of the blank, slope and intercept to the variability of x is expressed by the formula:

sx = (1/m) [s0² + si² + ((y − i)/m)² sm²]^(1/2)    (16)

This standard deviation of the concentration plays the role of sD and, because the standard deviation is assumed constant over the region of interest (s0 = sD = s), it can substitute for s0 in any of Currie's relations. If equation 16 is substituted into equation 13 and the blank signal is set to zero, the following relation is obtained:

LD = (k/m) [s0² + si² + (i/m)² sm²]^(1/2)    (17)

where k is a constant related to the degree of error the analyst is willing to accept. The mathematical expressions for s0, si and sm can be found in [25] and in publications specializing in statistics.

Experimentally, it has been found that the IUPAC approach, based exclusively on the blank variability, in most cases gives lower values of LD than the propagation of errors approach, which, besides the errors of the blank, takes into account the errors of the analyte measurement (slope and intercept). Consequently, the propagation of errors approach gives more realistic values of LD, consistent with the reliability of both the blank measurements and the signal measurements of the standards. In the literature, the propagation of errors approach is preferred in many fields of chemistry [25].

In order to calculate the limit of detection with the propagation of errors approach, it is necessary to prepare a minimum of five calibration curves to be able to estimate si and sm properly. All of these calibration curves would have to be prepared by fortifying control samples with the analyte of interest at concentrations around an estimated limit of detection, which makes the procedure cumbersome for dynamic systems such as chromatography [23].
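The following minimal sketch illustrates equation 17 with a single calibration curve and a set of blank replicates; in practice several curves would be needed, and the data and the risk factor k = 3 used here are hypothetical.

```python
# Minimal sketch of the propagation-of-errors limit of detection (equation 17).
# The calibration points, blank replicates and k value are hypothetical.
import numpy as np
from scipy import stats

x = np.array([0.5, 1.0, 2.0, 4.0, 6.0, 8.0])                   # standard contents (ng/mL)
y = np.array([118.0, 255.0, 498.0, 1005.0, 1495.0, 2020.0])    # measured signals
blanks = np.array([0.9, 1.4, 0.6, 1.1, 1.0, 0.8, 1.2])         # blank signals

fit = stats.linregress(x, y)
m, i = fit.slope, fit.intercept
s_m = fit.stderr                    # standard deviation of the slope
s_i = fit.intercept_stderr          # standard deviation of the intercept
s_0 = blanks.std(ddof=1)            # standard deviation of the blank

k = 3.0                             # risk factor chosen by the analyst
L_D = (k / m) * np.sqrt(s_0**2 + s_i**2 + (i / m)**2 * s_m**2)
print(f"LD (propagation of errors) = {L_D:.4f} ng/mL")
```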

2.2.4. Root mean square error approach

In this approach, the root mean square error (RMSE) is used instead of the standard deviation of the blank σ0 in equations 7, 9 and 15, corresponding to LC, LD and LQ, respectively. In order to calculate the LOD, it is necessary to generate a calibration curve, from which the values of the slope (m) and intercept (i) are obtained. From these values and the equation of the calibration curve, a predicted response yp is calculated, and then the error associated with each measurement:

(yp − y)    (18)

Then the sum of the squared errors is calculated over all the points of the calibration curve, and finally the RMSE:

RMSE = [ Σi=1..n (yi − yi,p)² / (n − 2) ]^(1/2)    (19)

Since the RMSE is calculated from a calibration curve, this approach uses both the variability of the blank and of the measurements. For dynamic systems, such as chromatography with autointegration systems, RMSE is easier to measure and more reliable than σ0 [23].
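A minimal sketch of the RMSE approach is given below, using a hypothetical calibration curve; the RMSE simply replaces σ0 in equations 7, 9 and 15, and the slope converts the limits to the concentration domain.

```python
# Minimal sketch of the RMSE approach (equations 18 and 19), hypothetical data.
import numpy as np

x = np.array([0.5, 1.0, 2.0, 4.0, 6.0, 8.0])                   # standard contents (ng/mL)
y = np.array([118.0, 255.0, 498.0, 1005.0, 1495.0, 2020.0])    # measured signals

m, i = np.polyfit(x, y, 1)
y_pred = i + m * x                                             # predicted responses
rmse = np.sqrt(np.sum((y - y_pred)**2) / (len(x) - 2))         # equation 19

L_C = 1.645 * rmse / m    # critical level, RMSE in place of sigma_0 (equation 7)
L_D = 3.29 * rmse / m     # detection limit (equation 9, alpha = beta = 0.05)
L_Q = 10.0 * rmse / m     # quantification limit (equation 15)
print(f"LC={L_C:.3f}  LD={L_D:.3f}  LQ={L_Q:.3f} ng/mL")
```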

2.2.5. The t99sLLMV approach

In this method, used to calculate the MDL, seven fortified samples with amounts of analyte close to an estimated limit of quantitation (ELOQ), or lowest limit of method validation (LLMV), are analyzed and their standard deviation sELOQ/LLMV is calculated. This value is, in a way, a substitute for σD in Currie's definitions. Because this approach was developed by the EPA, it is used to determine the MDL through the relation:

MDL = t99,n−1 sELOQ/LLMV    (20)

where t99,n−1 is the one-sided Student's t for n−1 observations (six degrees of freedom in our case) at the 99% confidence level; in this case t99,n−1 equals 3.143.

However, it is extremely important that the ELOQ be accurately determined, because the fortification concentration greatly influences the final values of the MDL and MQL determined by this approach. The EPA recommends that, if the calculated MQL is significantly different from the ELOQ, the procedure be repeated with the calculated MQL as the new ELOQ, and the MDL and MQL recalculated; this is done until the calculated values are in the range of the estimated values. This approach is considered a fairly accurate way of determining method detection limits [23]. It is similar in some respects to the so-called empirical method [24, 33], in which increasingly lower concentrations of the analyte are analyzed until the measurements no longer satisfy predetermined criteria.
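The sketch below applies equation 20 to seven hypothetical fortified-sample results; the fortification level of roughly 0.5 ng/mL stands in for the ELOQ/LLMV and is only an example.

```python
# Minimal sketch of the EPA-style t99sLLMV calculation (equation 20).
# The seven fortified-sample results are hypothetical.
import numpy as np
from scipy import stats

fortified = np.array([0.48, 0.52, 0.45, 0.55, 0.50, 0.47, 0.53])  # ng/mL found near the ELOQ

s = fortified.std(ddof=1)                        # s_ELOQ/LLMV
t99 = stats.t.ppf(0.99, len(fortified) - 1)      # one-sided t, 6 degrees of freedom (~3.143)
MDL = t99 * s
print(f"s = {s:.4f} ng/mL,  t(99%, 6 df) = {t99:.3f},  MDL = {MDL:.4f} ng/mL")
# If the implied quantification limit differs appreciably from the fortification level,
# re-fortify at the new estimate and repeat, as the EPA recommends.
```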

2.2.6. Baseline noise approach

The IUPAC and propagation of errors approaches were developed for spectroscopic analysis. Nearly all the concepts used in these approaches have an equivalent in chromatography, except the interpretation and measurement of s0. It has been proposed that the chromatographic baseline is analogous to a blank and that s0 must therefore represent a measure of the baseline fluctuations [21, 23].

Therefore, in order to calculate the LOD and LOQ it is necessary to measure the peak-to-peak noise (Np-p) of the baseline around the analyte retention time. Np-p can be related to the standard deviation of the blank through the relation [21]:

s0 = Np-p / 5    (21)

Despite being the simplest route to the detection capabilities of a chromatographic method, this approach is not recommended because it is very dependent on the analyst's interpretation: there is no agreement on where to measure the noise or on the extent of baseline that has to be measured. Therefore, the results obtained show great variability between laboratories, and even between analysts, and consequently they are hard to compare.
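For completeness, the sketch below shows the baseline-noise estimate of equation 21; the noise trace and the calibration slope are hypothetical, and, as noted above, the result depends strongly on where and over how long a segment the baseline is measured.

```python
# Minimal sketch of the baseline-noise approach (equation 21), hypothetical values.
import numpy as np

baseline = np.array([1.2, -0.8, 0.5, -1.1, 0.9, -0.6, 1.4, -1.0, 0.3, -0.7])  # signal near t_R
N_pp = baseline.max() - baseline.min()   # peak-to-peak noise
s_0 = N_pp / 5.0                         # equation 21

m = 250.0                                # hypothetical calibration slope (signal units per ng/mL)
LOD = 3.29 * s_0 / m                     # alpha = beta = 0.05
LOQ = 10.0 * s_0 / m
print(f"s0 = {s_0:.3f},  LOD = {LOD:.5f} ng/mL,  LOQ = {LOQ:.5f} ng/mL")
```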


3. Conclusions

The limit of detection is an important figure of merit in analytical chemistry. Testing the detection capabilities of a method is of the utmost importance during method development, even though it is not necessary to calculate the LOD in the validation of every method. It finds application in areas such as environmental analysis, food analysis and fields under great scrutiny such as forensic science.

Although the detection limit concept is deceptively simple, it is little understood by the chemistry community. This has caused the proliferation of terms relating to the detection capabilities of a method, each with a different approach for its determination, and has impeded efforts to harmonize the methodology.

Although various authors and agencies [20-28, 30-32] have published their own definitions of the detection limit of an analytical method, nowadays the limits of detection and quantification as defined in the hypothesis testing detection limit theory are the most widely accepted [20, 26, 28, 30-32].

This theory states:

  • Limits of detection are actual true values, which can be determined.

  • Both limits are chemical measurement process (CMP) performance characteristics and therefore involve all the phases of the analysis. Consequently, they should not be confused with terms that refer exclusively to the detection capabilities of the instrument, such as the IDL.

  • Detection limits are not associated with any particular outcome; they are a priori limits.

  • Both type I errors (false positives) and type II errors (false negatives) exist and must be considered.

  • The detection decision, on the other hand, is based on an a posteriori limit, the critical value.

  • Detection limits should not be confused with sensitivity, which is the slope of the calibration curve.

In developing the limit of detection theory, Currie made a series of assumptions: first, that the measurement distribution of the blanks follows a normal distribution, which is questionable at low concentrations, and secondly, in order to obtain simplified relations, that the standard deviation is constant over the range of concentrations studied (homoscedasticity). Under these specific circumstances the detection capabilities of a method depend exclusively on the blank variability [20, 26, 30-31].

Even the Hubaux-Vos prediction bands, with their non-uniform width, show that the assumption of homoscedasticity is not strictly true. Currie and other authors [26, 28, 30-31] have addressed this problem, but stated that if the standard deviation increases too sharply, limits of detection may not be attainable for the CMP in question.

Although in Currie's simplified relations the limits of detection depend exclusively on the blank variability, other sources of error are introduced in making the transition from the signal domain to the concentration domain, through the uncertainty of the slope and intercept of the calibration curve. To take this into account, several approaches have been developed, such as the propagation of errors, Hubaux-Vos and RMSE approaches [23-26, 32]. In practice, the IUPAC approach, which does not account for measurement variability, usually gives artificially low values of the limit of detection, while methods that account for slope and intercept uncertainties, such as the propagation of errors and Hubaux-Vos methods, give more realistic estimates, consistent with the reliability of the blank measurements and the signal measurements of the standards.

In other words, Currie's simplified equations are only valid when all their assumptions are met (normal distribution, homoscedasticity, and the blank as the main source of error). To verify this, good knowledge of the blanks is needed to generate confidence in the nature of the blank distribution, and some precision in the blank RSD is necessary; therefore an adequate number of full-scale true blanks must be assayed through the whole CMP.

Most of the assumptions of the IUPAC method are fulfilled in spectrophotometric analysis, for which it was developed and where it has been used successfully. However, in gas chromatography, where dynamic measurements are carried out, no practical rules are defined for measuring the blank standard deviation, the error associated with the intercept of the calibration curve is not always negligible, and the presence of interferences is important; it is better to use a method that takes these sources of error into account. Therefore, for chromatographic techniques the IUPAC approach is not recommended for the calculation of the detection capabilities of the method. Instead, the propagation of errors, Hubaux-Vos, RMSE and t99sLLMV approaches, which take into account the errors of the analyte measurements through a calibration curve, are recommended. A brief comparison of the different approaches for the determination of the detection capabilities of a CMP can be found in Table 4.

Method | Easy to apply | Variability of calibration curve | Considers method efficiency and matrix effect | Variability between laboratories and analysts
Ks0 | No | No | Yes | Moderate
Np-p | Yes | No | Yes | High
Propagation of errors | No | Yes | No | Low
Hubaux-Vos | No | Yes | No | Low
RMSE | Yes | Yes | No | Low
t99sLLMV | Yes | Yes | Yes | Low

Table 4.

Comparison of approaches for calculating the detection and quantification limits of analytical methods

Since several approaches can be used, and the limits of detection calculated by them can differ, it is important that, when reporting values of limits of detection, the method used to obtain them be clearly identified, so that meaningful comparisons can be made.

In order to properly determine the limit of detection and limit of quantification of a method, it is necessary to know the theory behind them, to recognize the scope and limitations of each approach, and to be able to choose the method that best suits the CMP at hand. The intention of this chapter is to review the fundamentals of detection limit determination in order to achieve a better understanding of this key element of trace analysis in particular and analytical chemistry in general, and to promote a more scientific and less arbitrary use of this figure of merit, with a view to its harmonization and to avoiding the confusion about it which still prevails in the chemical community.


Nomenclature

α Type I error, false positive
ACS American Chemical Society
ADL Analytical Detection Level
AML Alternative Minimum Level
API American Petroleum Institute
ASTM American Society for Testing and Materials
β Type II error, false negative
CDER Center for Drug Evaluation and Research
CMP Chemical Measurement Process
CRQL Contract Required Quantitation Limit
DL Detection limit/level
DLR Detection Limit for Purpose of Reporting
δα,β,ν Non-centrality parameter of the non-central t distribution
EDL Estimated Detection Limit
ELOQ Estimated limit of quantification
EPA (United States) Environmental Protection Agency
EQL Estimated Quantitation Limit
FDA United States Food and Drug Administration
GLP Good Laboratory Practice
i Intercept of the calibration curve
ICH International Conference for Harmonization
IDL Instrumental Detection Limit
IML Interim Minimum level
IQE Interlaboratory Quantification Estimate
ISO/IEC International Standards Organization
IUPAC International Union of Pure and Applied Chemistry
Kα Value corresponding to the one-sided tail of the blank distribution at probability level 1-α
Kβ Value corresponding to the one-sided tail of the distribution of L when L=LD, at probability level 1-β
L Quantity of interest
LC Critical value
LD Minimum detectable value
LQ Minimum Quantification level/value
LLMV Lower limit of method validation
LOD Limit of Detection
LOD* Limit of detectability
LOM Limit of Measurement
LOQ Limit of Quantification
m Sensitivity, slope of calibration curve
MAL Minimum Analytical Level
MDL Method Detection Limit
MDL* Minimum Detection Limit
ML Minimum Level
MLD Minimum Level of Detectability
MQL Method quantitative limit
MQL* Minimum quantification level
MQL** Minimum analytical quantification level
Np-p Peak to peak noise
NJQL New Jersey Quantitation Level
PQL Practical Quantification Level
Pr Probability
QL Quantification Level
RMSE Root mean square error
RSDQ Relative standard deviation when L=LQ
S Analyte signal
σ0 Standard deviation of the blank
σD Standard deviation when L=LD
σQ Standard deviation when L=LQ
sELOQ/LLMV Standard deviation when L=ELOQ or L=LLMV
SQL Sample Quantitation Limit
t1-α,ν Student's t at probability 1-α with ν degrees of freedom
t99n-1 One sided Student’s t, for N-1 observations at the 99% confidence level
USP United States Pharmacopeia
x Amount or concentration
xD The lowest content that can be distinguished from zero
yC The lowest measurable signal
yp Predicted response


Acknowledgments

I would like to thank Chemical Engineer Sarai Cortes for the exchange of ideas and knowledge on this subject and for the joint work.

References

  1. Hirsch AF. Good laboratory practice regulations. New York: Marcel Dekker; 1989.
  2. EURACHEM Guide: The fitness for purpose of analytical methods. A laboratory guide to method validation and related topics. Teddington: LGC; 1998.
  3. Title 21 of the US Code of Federal Regulations: 21 CFR 211, Current good manufacturing practice for finished pharmaceuticals. Washington: U.S. Government Printing Office; 2012.
  4. Title 21 of the US Code of Federal Regulations: 21 CFR 58, Good laboratory practice for nonclinical laboratory studies. Washington: U.S. Government Printing Office; 2012.
  5. Title 21 of the US Code of Federal Regulations: 21 CFR 320, Bioavailability and bioequivalence requirements. Washington: U.S. Government Printing Office; 2012.
  6. ICH Q7A: Good manufacturing practice guide for active pharmaceutical ingredients. Rockville: ICH; 2001.
  7. USP 32-NF 27. General chapter 1225, Validation of compendial methods. Rockville: USP; 2009.
  8. ISO/IEC 17025, General requirements for the competence of testing and calibration laboratories. 2nd edition. Switzerland: ISO/IEC; 2005.
  9. ISO 9000: Quality management systems – Fundamentals and vocabulary. Geneva: ISO; 2005.
  10. ISO, Guide to the expression of uncertainty in measurement. Geneva: ISO; 1995.
  11. Youden WJ, Steiner EH. Statistical manual of the AOAC. Gaithersburg: AOAC International; 1975.
  12. Validation of analytical procedures: text and methodology Q2(R1). Rockville: ICH; 2005.
  13. Validation of analytical methods for food control. Report of a joint FAO/IAEA expert consultation. Rome: FAO; 1998.
  14. Thompson M, Ellison SLR, Wood R. Harmonized guidelines for single-laboratory validation of methods of analysis. Pure Appl. Chem. 2002; 74(5) 835-855.
  15. U.S. FDA. Guidance for Industry (draft): Analytical procedures and methods validation: chemistry, manufacturing, and controls documentation. Rockville: FDA; 2000.
  16. Lopez Garcia P, Buffoni E, Pereira Gomez F, Vilchez Quero JL. Analytical method validation. In: Akyar I. (ed.) Wide spectra of quality control. Rijeka: InTech; 2011.
  17. Huber L. Validation of analytical methods: a primer. Germany: Agilent; 2010.
  18. Reviewer guidance: Validation of chromatographic methods. Rockville: Center for Drug Evaluation and Research; 1994.
  19. Validation and peer review of US EPA chemical methods of analysis. EPA; 2005.
  20. Currie LA. Nomenclature in evaluation of analytical methods including detection and quantification capabilities. Pure Appl. Chem. 1995; 67(10) 1699-1723.
  21. Foley JP, Dorsey JG. Clarification of the limit of detection in chromatography. Chromatographia 1984; 18(9) 503-511.
  22. International Organization for Standardization. Capability of detection. Part 1: Terms and definitions. ISO 11843-1. Geneva: ISO; 1997.
  23. Corley J. Best practices in establishing detection and quantification limits for pesticide residues in foods. In: Lee P. (ed.) Handbook of residue analytical methods for agrochemicals. West Sussex: John Wiley and Sons; 2003. p. 59-75.
  24. Armbruster DA, Tillman MD, Hubbs LA. Limit of detection (LOD)/limit of quantification (LOQ): comparison of the empirical and the statistical methods exemplified with GC-MS assays of abused drugs. Clin. Chem. 1994; 40(7) 1233-1238.
  25. Long GL, Winefordner JD. Limit of detection: a closer look at the IUPAC definition. Anal. Chem. 1983; 55(7) 712A-724A.
  26. Currie LA. Limits for qualitative detection and quantitative determination. Anal. Chem. 1968; 40(3) 586-593.
  27. Chang KH. Limit of detection and its establishment in analytical chemistry. Health Env. J. 2011; 2(1) 38-43.
  28. Lavagnini I, Magno F. A statistical overview on univariate calibration, inverse regression and detection limits: application to GC/MS technique. Mass Spectrom. Rev. 2007; 26 1-18.
  29. API. Analytical detection and quantification limits: survey of state and federal approaches. Publication No. 4721. Washington: American Petroleum Institute; 2002.
  30. Currie LA. Detection and quantification limits: origins and historical overview. Anal. Chim. Acta 1999; 391(2) 127-134.
  31. Currie LA. Detection: international update, and some emerging dilemmas involving calibration, the blank and multiple detection decisions. Chemom. Intell. Lab. Syst. 1997; 37 151-181.
  32. Hubaux A, Vos G. Decision and detection limits for linear calibration curves. Anal. Chem. 1970; 42(8) 849-855.
  33. Needleman SB, Romberg RW. Limits of linearity and detection for some drugs of abuse. J. Anal. Toxicol. 1990; 14(1) 34-38.
