Limit of Detection and Limit of Quantification Determination in Gas Chromatography



Introduction
Every year, millions of analyses of every kind are performed around the world, and millions of decisions are made based on them. Do these medicines contain the amount of drug stated on their container? Can we safely consume this water or these foods? Are these alloys suitable for use in aircraft construction? Was the driver drunk when he crashed? Is this sportsman using drugs to enhance his performance, and if he is punished, is there no doubt that he was using those substances, or are we being unfair to him? All these questions are answered with the help of a chemical analysis, and all have consequences in real life (compensation claims, disease, fines, even prison). Virtually every aspect of society is supported in some way by analytical measurement; consequently, these analyses need to be reliable.
Until the 1970s, the underlying assumption was that the reports submitted by laboratories accurately described study conduct and precisely reported the study data. Suspicion about this assumption was raised during the review of some studies, when data inconsistencies and evidence of unacceptable laboratory practices came to light [1]. If the result of a test cannot be trusted then it has little value, and the test might as well not have been carried out. When a client commissions analytical work from a laboratory, it is assumed that the laboratory has a degree of expertise that the client does not have. The client expects to be able to trust the results reported. Thus, the laboratory and its staff have a clear responsibility to justify the client's trust by providing the right answer to the analytical part of the problem, in other words, results that have demonstrable "fitness for purpose" [2]. Implicit in this is that the tests carried out are appropriate for the analytical part of the problem that the client wishes solved, and that the final report presents the analytical data in such a way that the client can readily understand it and draw appropriate conclusions. Method validation enables chemists to demonstrate that a method is "fit for purpose" [1]. For an analytical result to be fit for its intended purpose it must be sufficiently reliable that any decision based on it can be taken with confidence. Thus the method performance must be validated and the uncertainty of the result, at a given level of confidence, estimated. Uncertainty should be evaluated and quoted in a way that is widely recognized, internally consistent and easy to interpret. Most of the information required to evaluate uncertainty can be obtained during validation of the method [1].
Since then, several agencies, such as the United States Food and Drug Administration (FDA) [3-5], the International Conference on Harmonisation (ICH) [6], the United States Pharmacopeia (USP) [7] and the International Organization for Standardization (ISO/IEC) [8], have created working groups to ensure the validity and reliability of studies. They eventually published standards for measuring the performance of laboratories and enforcement policies. Good laboratory practice (GLP) regulations were finally proposed in 1976, with method validation being an important part of GLP.

Method validation
ISO [9] defines validation as the confirmation, through the provision of objective evidence, that the requirements for a specific intended use or application have been met; method validation is thus the process of defining an analytical requirement and confirming that the method under consideration has performance capabilities consistent with what the application requires [2]. Therefore, method validation should be an essential component of the measurements that a laboratory makes to allow it to produce reliable analytical data; consequently, method validation should be an important part of the practice of all chemists around the world. Nevertheless, the knowledge of exactly what needs to be done to validate a method seems to be poor amongst analytical chemists. The origin of the problem is the fact that many of the technical terms used in processes for evaluating methods vary between different sectors of analytical measurement, both in their meaning and in the way they are determined [2].
It is not the purpose of this work to define the previous parameters, the procedures to determine them, or the strategies to perform a validation; readers concerned with these issues are invited to consult the following references [2,7,10-19].

Limit of detection
From the previous section, it is clear that despite the efforts to standardize concepts, there is still confusion about some terms in method validation, like selectivity and specificity, ruggedness and reproducibility, accuracy and trueness. Nevertheless, the most troublesome concept of all in method validation is the limit of detection (LOD). The LOD remains an ambiguous quantity in analytical chemistry in general and gas chromatography in particular; LODs differing by orders of magnitude are frequently reported for very similar chemical measurement processes (CMPs). Such discrepancies raise questions about the validity of the concept of the LOD.
The limit of detection is the smallest amount or concentration of analyte in the test sample that can be reliably distinguished from zero [20]. Despite the simplicity of the concept, the whole subject of LOD is fraught with problems, which translate into the observed discrepancies in the calculation of the LOD. Some of the problems are [15]:
• There are several conceptual approaches to the subject, each providing a somewhat different definition of the limit; consequently, the methodologies used to calculate the LOD derived from these definitions differ from one another.
• LOD is confused with other concepts like sensitivity.
• Estimates of LOD are subject to quite large random variation.
• Statistical determinations of LOD assume normality, which is at least questionable at low concentrations.
• The LOD, which characterizes the whole chemical measurement process (CMP), is confused with concepts that characterize only one aspect of the CMP, the detection step.
These problems are more prominent in the field of chromatography, where, besides the previous issues, no standard model for the LOD has ever been proposed by any recognized organization. Actually, the International Union of Pure and Applied Chemistry (IUPAC) model for LOD determination was developed specifically for spectrochemical analysis. Thus, chromatographic conditions are usually not taken into consideration when determining the LOD [15,21].
The main purpose of this paper is to bring some light to these problems. In order to achieve this goal, the different problems behind the LOD and the limit of quantification (LOQ) are discussed. The different definitions of and conceptual approaches to LOD and LOQ given by different associations [2,7,20,22], the different models to calculate LOD and LOQ, and the effects of the matrix and of particularities of chromatographic techniques on LOD and LOQ calculations are critically reviewed [23-25], aiming at unifying criteria and making the LOD and LOQ more reliable figures of merit in chromatography.

Definitions
Since the seminal work of Currie [26], emphasis has been placed on the negative effect of the large number of terms that have been used over the years for the detection capabilities of a method (Table 2). A wide range of terminologies and multiple mathematical expressions have been used to define the limit of detection concept. These different terms resulted in different ways of calculating the LOD, leading to numerical values that can span three orders of magnitude when applied to the same measurement process. Some mathematical definitions involved the standard deviation of the blank, some the standard deviation of the net signal; some authors used two-sided confidence intervals, others used one-sided intervals, and some even used non-statistical definitions. What was missing from these authors was a theoretical basis for the concept leading to an operational definition of the term.
To overcome the previous problem, ISO and IUPAC developed documents bringing their nomenclature into essential agreement [15,20,22]. As the measure of the detection capability of a CMP, IUPAC recommends the term minimum detectable value (L_D) of the appropriate chemical variable, or detection limit, and defines it as the smallest amount or concentration of analyte in the test sample that can be reliably distinguished from zero. As the measure of the quantification capability of the CMP, IUPAC recommends the term minimum quantifiable value (L_Q), or quantification limit, which is the concentration or amount below which the analytical method cannot operate with an acceptable precision. The ISO definition puts the emphasis on the statistics; ISO [2,22] defines the minimum detectable net concentration as the true net concentration or amount of the analyte in the material to be analyzed which will lead, with probability 1-β, to the conclusion that the concentration of the analyte in the analyzed material is larger than that of the blank matrix.
In addition, several organizations have introduced terms with similar meaning to LOD. The US Environmental Protection Agency (EPA) uses the term method detection limit (MDL) for the minimum concentration of an analyte that can be identified, measured and reported with 99% confidence that the analyte concentration is greater than zero.
On the other hand, the ICH defines LOD as the lowest amount of analyte in a sample which can be detected but not necessarily quantitated as an exact value and LOQ of an individual analytical procedure as the lowest amount of analyte in a sample which can be quantitatively determined with suitable precision and accuracy [19,27].
The American Chemical Society (ACS) Committee on Environmental Improvement defines the LOD as the lowest concentration of an analyte that the analytical process can reliably detect [27].
The Center for Drug Evaluation and Research (CDER), in its document entitled 'Validation of Chromatographic Methods', defines the LOD as the lowest concentration of analyte in a sample that can be detected, but not necessarily quantitated, under the stated experimental conditions [19,27].
The National Association of Testing Authorities (NATA) defines the LOD as the smallest amount or concentration that can be readily distinguished from zero and be positively identified according to predetermined criteria and/or level of confidence, while the lowest concentration of an analyte that can be determined with acceptable precision (repeatability) and accuracy under the stated conditions of the test is the limit of quantification [2,27].
The AOAC, in turn, defines the limit of detection as the lowest content that can be measured with reasonable statistical certainty, and the limit of quantification as the content equal to or greater than the lowest concentration point on the calibration curve [2].
Finally, the USP [7] defines the LOD as the lowest amount of analyte that can be detected, but not necessarily quantitated, under the stated experimental conditions. Table 2 summarizes the terms, symbols and statistical items reported in the literature [28].
All definitions include terms such as reliability, probability and confidence, which implies the use of statistics to calculate them. Some definitions even state the required degree of reliability explicitly within their text; the others leave that decision to the operator. Some definitions make it clear that they refer to a CMP and not only to the detection phase of the analysis. Therefore, these terms should not be confused with terms that refer only to detection, like the instrument detection limit (IDL) used by the EPA. The limit of detection is a parameter set before the measurement, that is, it is defined a priori; therefore, it is not related to the decision of whether a particular measurement detected anything or not. Finally, these terms should not be confused with sensitivity, defined by IUPAC as the slope of the calibration curve.
As an example of the wide range of terms currently used to describe the detection capabilities of a method, consider a study performed in 2002 [29] by the American Petroleum Institute (API), which reviewed the policies of ten US states related to analytical detection and quantification limits, with particular focus on water quality and wastewater issues in permitting and compliance; in principle, these regulations should follow the EPA recommendations. It was found that every state incorporates detection or quantification terms in its regulations to some extent. The terms referenced are usually, but not always, defined in the regulations. The most frequently used terms are detection limit/level, method detection limit (MDL), limit of detection (LOD) and practical quantitation level (PQL). Minimum level (ML) is the term used by the EPA instead of LOQ; it is defined as the concentration at which the entire analytical system must give a recognizable signal and acceptable calibration point. The ML is the concentration in a sample that is equivalent to the concentration of the lowest calibration standard analyzed by a specific analytical procedure, assuming that all of the method-specified sample weights, volumes and processing steps have been followed. The EPA uses other terms, like the interim minimum level (IML), created to describe MLs that are based on 3.18 times the MDL, to distinguish them from MLs that have been promulgated. The EPA defines the PQL as the lowest level that can be reliably achieved within specified limits of precision and accuracy during routine laboratory operating conditions. Another term used by the EPA is the alternative minimum level (AML), which can account for interlaboratory variability and sample matrix effects; finally, the interlaboratory quantification estimate (IQE), developed by the American Society for Testing and Materials (ASTM), is similar to the AML.
Table 3 lists detection and quantification terms used by the ten states. Therefore, despite the efforts of various agencies to standardize the terms concerning the detection capability of a measurement process, there are still differences between the various regulations.

Hypothesis testing approach
In 1968, Currie published the hypothesis testing approach to detection decisions and limits in chemistry [26]. This approach has gradually been accepted as the detection limit theory.
Currie's achievement was in recognizing that there are two different questions under consideration when measurements are performed on a specimen under test, and that these two questions have different answers. The first question, "does the measurement result indicate detection or not?", is answered by performing measurements on the specimen under test and then computing an appropriate measure for comparison with a critical decision level. The second question is "what is the lowest analyte content that will reliably indicate detection?", and the answer is defined as the detection limit. In the hypothesis testing theory [20,30,31], detection limits (minimum detectable amounts) are based on the probabilities of false positives (α) and false negatives (β), while quantification limits are defined in terms of a specified value for the relative standard deviation. It is important to emphasize that both types of limits are CMP performance characteristics, associated with underlying true values of the quantity of interest; they are not associated with any particular outcome or result. The detection decision, on the other hand, is result-specific; it is made by comparing the experimental result with the critical value, which is the minimum significantly estimated value of the quantity of interest. In other words, the critical value is used to make an a posteriori decision about a given result, while the limit of detection is an a priori estimate of the detection capability of the measurement process.
In order to explain the detection limit theory, let us assume we have an analytical method with known precision at all its concentration levels and whose results follow a normal distribution. If we test a large number of blanks with such a method, a distribution like the one in Figure 1 is obtained.
The blank values will distribute around zero with a standard deviation σ_0. In other words, if we measure a blank, we can obtain a result different from zero due to the experimental errors of the measurement process. Thus we need to establish a point that differentiates blank measures from non-blank measures. That point is the critical value L_C; it allows us to determine whether a signal corresponds to a blank or to an analyte, i.e. to make an a posteriori decision. Nevertheless, at the critical level there is a probability (blue shadow) that a blank gives a signal above L_C, so we may erroneously conclude that analyte is present when it is not. That probability, α, is named the type I error or false positive rate. The value of α is chosen by the analyst according to the risk of being wrong that one is willing to accept. The hypothesis testing theory uses the following definition.
Pr(L̂ > L_C | L = 0) = α (1)

where L is used as the generic symbol for the quantity of interest; it is replaced by S when treating net analyte signals and by x when treating analyte concentrations or amounts. Mathematically, the critical level is given as:

L_C = K_α σ_0 (2)

where K_α and α are linked through the one-sided tail of the distribution of the blank corresponding to the probability level 1-α.
Nevertheless, the critical level L_C cannot be used as the limit of detection, because if we measure a series of samples with an amount of analyte equal to L_C, the results will, like the blanks, follow a normal distribution around the L_C value (Figure 2). Half of the results would fall above L_C, so we would conclude that the signal comes from an analyte, and half would fall below L_C, so we would think the sample is a blank. Therefore, if we set L_C as the limit of detection, we would report half of the results erroneously; there is thus the possibility of reporting that the analyte is not present in the sample when it actually is. That probability, β, is named the type II error or false negative rate.
Therefore, if a laboratory cannot accept a 50% error rate around the limit of detection, the only alternative to reduce the probability of false negatives is to set the limit of detection at a higher concentration (Figure 3). Once L_C has been defined, the a priori limit of detection L_D may be established by specifying L_C, the acceptable level β for the type II error and the standard deviation σ_D, which characterizes the probability distribution of the signal when its true value is equal to L_D [26]. By the hypothesis testing theory, we obtain the following relation:

Pr(L̂ ≤ L_C | L = L_D) = β (3)

Mathematically, the limit of detection is given as:

L_D = L_C + K_β σ_D (4)

If equation 2 is substituted into equation 4, we obtain:

L_D = K_α σ_0 + K_β σ_D (5)

where K_β and β are linked through the one-sided tail of the distribution at the limit of detection corresponding to the probability level 1-β. Finally, the defining relation for the limit of quantification (L_Q) is:

L_Q = K_Q σ_Q (6)

where K_Q = 1/RSD_Q and σ_Q equals the standard deviation of L when L = L_Q. Summarizing, the levels L_C, L_D and L_Q are determined by the error structure of the measurement process, the risks α and β, and the maximum acceptable relative standard deviation for quantitative analysis. L_C is used to test an experimental result, whereas L_D and L_Q refer to the capabilities of the measurement process itself. The relations among the three levels and their significance in analysis are shown in Figure 4. α, β and K_Q can be chosen by the analyst according to the detection and quantification needs. Of particular interest is the case α = β with σ constant; in these circumstances K_α = K_β = K and σ_D = σ_0, so that:

L_C = Kσ_0 (7)

and, because σ_0 = σ_D = σ, equation 5 can be written as:

L_D = 2Kσ = 2L_C (9)

The ability to quantify is expressed in terms of the signal or analyte content that will produce estimates having a specified relative standard deviation (RSD), commonly 10%.
That is:

L_Q = K_Q σ_Q = 10σ_0 (15)

where L_Q is the limit of quantification, σ_Q the standard deviation at the limit of quantification, and K_Q the multiplier whose reciprocal equals the selected RSD; the IUPAC default value is 10. It is possible to transform all these expressions from the signal domain to the concentration domain, and vice versa, through the slope of the calibration curve.
The above relations represent the simplest possible case, based on restrictive assumptions. Actually, some of these assumptions are questionable, such as the normality of the blank measures and the constancy of σ over the region between the critical level and the limit of detection. These relations must not be taken as the defining relations for the detection and quantification capabilities; the defining relations are equations 1, 3 and 6 for the critical level, the limit of detection and the limit of quantification, respectively. Finally, for chemical measurement at least, the fundamental contributing factor to the detection and quantification performance characteristics is the variability of the blank.
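As a numerical illustration of these relations, the following Python sketch (our own illustration, not part of the cited references; the function name and the defaults K = 1.645, i.e. α = β = 0.05, and K_Q = 10 are assumptions) estimates L_C, L_D and L_Q in the concentration domain from replicate blank signals and a known calibration slope:

```python
import math

def currie_limits(blank_signals, slope, k=1.645, k_q=10.0):
    """Estimate L_C, L_D and L_Q (concentration domain) from replicate
    blank measurements, assuming alpha = beta = 0.05 (K = 1.645),
    homoscedastic Gaussian noise and a known calibration slope."""
    n = len(blank_signals)
    mean = sum(blank_signals) / n
    # sample standard deviation of the blank signals, sigma_0
    s0 = math.sqrt(sum((v - mean) ** 2 for v in blank_signals) / (n - 1))
    l_c = k * s0 / slope          # critical level (eq. 7)
    l_d = 2 * k * s0 / slope      # detection limit (eq. 9)
    l_q = k_q * s0 / slope        # quantification limit (eq. 15)
    return l_c, l_d, l_q
```

With these defaults the familiar rules of thumb are recovered: L_D ≈ 3.29σ_0 and L_Q = 10σ_0, divided by the slope to move into the concentration domain.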
Currie's hypothesis testing schema, despite being theoretically solid, is very broad in scope, being independent of the noise, the measurement methodology, the conditions, etc. In fact, his schema has no connection to the calibration curve methodology, or even to the substance being measured.
A particular problem for the calculation of the limit of detection within the field of gas chromatography is the calculation of the standard deviation of the blank. It has been suggested to measure 20 blanks and calculate their standard deviation. Other authors suggest measuring the noise at the baseline of one chromatogram in a region near the analyte peak [21]. Nevertheless, questions arise about the integration parameter settings, the region of the baseline that should be used to calculate the blank variability and the presence of interfering substances. This makes the determination of σ_0 subjective and highly variable, and it is a major drawback of using the IUPAC definition in dynamic systems such as chromatography [23]. In order to introduce the calibration curve into the limit of detection calculation, other approaches have been developed to calculate the detection capabilities of a CMP.

Hubaux-Vos approach
In 1970, Hubaux and Vos suggested how Currie's schema could be implemented with calibration curve methodology for a CMP with homoscedastic (σ = constant), Gaussian noise, ordinary least squares processing of the calibration curve data and ordinary least squares prediction intervals [32]. Since then, Hubaux and Vos' treatment has generally been assumed to be fundamentally correct.
Hubaux and Vos made a series of assumptions to develop their approach. It was assumed that the standards of the calibration curve are independent, that the deviation is constant throughout the calibration curve, that the contents of the standards are accurately known and, above all, that the signals of all the points in the calibration curve follow a Gaussian distribution (Figure 5).
Hubaux and Vos then drew two confidence limits on either side of the regression line with an a priori level of confidence 1−α−β (Figure 5). The regression line and its two confidence limits can be used to predict, with probability 1−α−β, likely values for signals. The confidence band can also be used in reverse: for a measured signal y of a sample of unknown content, it is possible to predict the range of its content (x_max - x_min) (Figure 5).
For our subject, a signal equal to y_C is of interest (Figure 5), for which the lower limit of content is zero. Signals equal to or lower than y_C have a probability greater than α of being due to a blank, and hence cannot be distinguished from a blank signal. y_C is the lowest measurable signal and therefore corresponds to the critical level L_C of Currie; more exactly, y_C is an estimate of L_C. Because this limit concerns signals, it is used for a posteriori decisions.
If we trace a line from y_C to the lower confidence limit and then down to the x axis (Figure 5), the value x_D is obtained, which is the lowest content that can be distinguished from zero. This value is inherent to the CMP and can be used as an a priori limit; it is thus equivalent to the limit of detection L_D of Currie. It is important to clarify that the regression line and its confidence limits are estimates of the real values; consequently, the values of y_C and x_D are estimates too [32]. Figure 5. The linear calibration line, with its upper and lower confidence limits; y_C is the decision limit and x_D the detection limit [33].
One serious problem with the Hubaux-Vos approach is the non-constant width of the prediction interval, which contradicts the assumption of homoscedasticity; another is that, because y_C and x_D are estimates, this method requires the generation of multiple calibration curves to calculate the means of y_C and x_D.
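Under the stated assumptions (homoscedastic Gaussian noise, ordinary least squares), the Hubaux-Vos construction can be sketched as below. This is our own minimal illustration, not the authors' original code: the one-sided Student's t value for n−2 degrees of freedom is supplied by the caller, and x_D is found by iterating along the lower prediction band.

```python
import math

def hubaux_vos(x, y, t):
    """Estimate the decision limit y_C and detection limit x_D from one
    calibration data set using ordinary least squares prediction bands."""
    n = len(x)
    xbar = sum(x) / n
    ybar = sum(y) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    m = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx
    i = ybar - m * xbar                              # intercept
    sse = sum((yi - (i + m * xi)) ** 2 for xi, yi in zip(x, y))
    s = math.sqrt(sse / (n - 2))                     # residual std. dev.

    def half_width(x0):
        # half-width of the prediction interval at content x0
        return t * s * math.sqrt(1 + 1 / n + (x0 - xbar) ** 2 / sxx)

    y_c = i + half_width(0.0)        # upper prediction band at zero content
    x_d = (y_c - i) / m              # first guess for x_D, then iterate
    for _ in range(50):              # on the lower prediction band
        x_d = (y_c - i + half_width(x_d)) / m
    return y_c, x_d
```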

Propagation of errors approach
In the hypothesis testing approach, the value of the limit of detection L_D depends only on the variability of the blanks, σ_0. The propagation of errors approach considers instead the standard deviation of the concentration, s_x. This value is calculated by including the standard deviations of the blank (s_0), slope (s_m) and intercept (s_i) in the equation [25]. The contribution of the variability of the slope, blank and intercept to the variability of x is expressed by the formula:

s_x = (1/m)·sqrt(s_0² + s_i² + x²·s_m²) (16)

The standard deviation of the concentration is equal to s_D, and because the standard deviation is assumed constant over the region of interest (s_0 = s_D = s), it can substitute s_0 in any of Currie's relations. If we substitute equation 16 into equation 13 and assume the blank's signal is set to zero, the following relation is obtained:

L_D = (2K/m)·sqrt(s_0² + s_i²)
where K is a constant related to the degree of error the analyst assumes. The mathematical expressions for s_0, s_i and s_m can be found in [25] and in publications specialized in statistics.
Experimentally, it has been found that the IUPAC approach, based exclusively on the blank variability, in most cases gives lower values of L_D than the propagation of errors approach, which, besides the errors of the blank, takes into account the errors in the analyte measurement (slope and intercept). Consequently, the propagation of errors approach gives more realistic values of L_D, consistent with the reliability of both the blank measures and the signal measures of the standards. In the literature, the propagation of errors approach is preferred in many fields of chemistry [25].
In order to calculate the limit of detection with the propagation of errors approach, it is necessary to prepare a minimum of five calibration curves to be able to measure s_i and s_m properly. All of these calibration curves would have to be prepared by fortifying control samples with the analyte of interest at concentrations around an estimated limit of detection, which makes the procedure cumbersome for dynamic systems such as chromatography [23].
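A minimal sketch of the propagation of errors calculation from a single calibration curve follows; this is our own illustration, assuming the textbook standard error formulas for the slope and intercept, equation 16 evaluated near zero concentration and α = β, so that L_D = 2K·s_x. The function name and the default K = 1.645 are ours.

```python
import math

def lod_propagation(x, y, s0, k=1.645):
    """Propagation-of-errors LOD: combine the blank standard deviation
    s0 with the regression standard errors of the intercept (s_i) and
    slope (s_m) obtained from one calibration curve."""
    n = len(x)
    xbar = sum(x) / n
    ybar = sum(y) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    m = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx
    i = ybar - m * xbar
    s = math.sqrt(sum((yi - (i + m * xi)) ** 2
                      for xi, yi in zip(x, y)) / (n - 2))
    s_m = s / math.sqrt(sxx)                      # slope standard error
    s_i = s * math.sqrt(1 / n + xbar ** 2 / sxx)  # intercept standard error
    s_x = math.sqrt(s0 ** 2 + s_i ** 2) / m       # eq. 16 near x = 0
    return 2 * k * s_x                            # L_D = 2K * s_x
```

Note that near zero concentration the slope term x²·s_m² of equation 16 vanishes, which is why s_m does not appear in the final expression.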

Root mean square error approach
In this approach, the root mean square error (RMSE) is used instead of the standard deviation of the blank σ_0 in equations 7, 9 and 15, corresponding to L_C, L_D and L_Q, respectively. In order to calculate the LOD, it is necessary to generate a calibration curve, from which the values of the slope (m) and intercept (i) are obtained. From these values and the equation of the calibration curve, a predicted response y_p is calculated for each standard, and then the error associated with each measurement:

e = y − y_p

The squares of the errors are then summed over all the points of the calibration curve, and finally the RMSE is obtained as the square root of their mean.
Since the RMSE is calculated from a calibration curve, this approach captures both the variability of the blank and that of the measurements. For dynamic systems, such as chromatography with auto-integration systems, the RMSE is easier to measure and more reliable than σ_0 [23].

The t 99sLLMV approach
In this method for calculating the MDL, seven fortified samples with amounts of analyte close to an estimated limit of quantitation (ELOQ), or lowest limit of method validation (LLMV), are analyzed, and their standard deviation s_ELOQ/LLMV is calculated. This value is, in a way, a substitute for σ_D in Currie's definitions. Because this approach was developed by the EPA, it is used to determine the MDL with the relation:

MDL = t_99,n−1 · s_ELOQ/LLMV

where t_99,n−1 is the one-sided Student's t for n−1 observations (six degrees of freedom in our case) at the 99% confidence level; in this case t_99,n−1 equals 3.143.
However, it is extremely important that the ELOQ be accurately determined, because the fortification concentration greatly influences the final values of the MDL and MQL determined by this approach. The EPA recommends that, if the calculated MQL is significantly different from the ELOQ, the procedure be repeated with the calculated MQL as the new ELOQ, and the MDL and MQL recalculated. This should be done until the calculated values are in the range of the estimated values. This approach is considered a fairly accurate way of determining method detection limits [23]. It is similar in some respects to the so-called empirical method [24,33], where increasingly lower concentrations of the analyte are analyzed until the measurements no longer satisfy a predetermined criterion.
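The t99 calculation itself is straightforward; the sketch below is our own illustration, with t_99,n−1 = 3.143 for seven replicates taken from the text:

```python
import math

def epa_mdl(replicates, t99=3.143):
    """EPA-style MDL: sample standard deviation of replicate fortified
    samples times the one-sided 99% Student's t for n-1 degrees of
    freedom (3.143 for seven replicates)."""
    n = len(replicates)
    mean = sum(replicates) / n
    s = math.sqrt(sum((r - mean) ** 2 for r in replicates) / (n - 1))
    return t99 * s
```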

Baseline noise approach
The IUPAC and propagation of errors approaches were developed for spectroscopic analysis. Nearly all the concepts used in these approaches have an equivalent in chromatography, except the interpretation and measurement of σ_0. It has been proposed that the chromatographic baseline is analogous to a blank and that σ_0 must represent a measure of the baseline fluctuations [21,23].
Therefore, in order to calculate the LOD and LOQ it is necessary to measure the peak-to-peak noise (N_p-p) of the baseline around the analyte retention time. N_p-p can be related to the standard deviation of the blank through the relation [21]:

σ_0 = N_p-p/5

Despite being the simplest path to determining the detection capabilities of a chromatographic method, this approach is not recommended because it is very dependent on analyst interpretation, since there is no agreement on where to measure the noise or on the extent of baseline that has to be measured. Therefore, the results obtained show great variability between laboratories, and even between analysts, and consequently are hard to compare.
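A minimal sketch of the baseline noise approach, assuming σ_0 ≈ N_p-p/5 (Gaussian noise) and Currie's simplified relations; the function name and the defaults K = 1.645 and K_Q = 10 are our assumptions:

```python
def limits_from_noise(n_pp, slope, factor=5.0, k=1.645, k_q=10.0):
    """Estimate L_D and L_Q (concentration domain) from the peak-to-peak
    baseline noise measured around the analyte retention time."""
    s0 = n_pp / factor                   # sigma_0 ~ N_p-p / 5
    l_d = 2 * k * s0 / slope             # detection limit (eq. 9)
    l_q = k_q * s0 / slope               # quantification limit (eq. 15)
    return l_d, l_q
```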

Conclusions
The limit of detection is an important figure of merit in analytical chemistry. It is of the utmost importance, when developing methods, to test the detection capabilities of a method, and although it is not necessary to calculate it during the validation of every method, it finds applications in areas such as environmental analysis and food analysis, and in areas under great scrutiny, such as forensic science.
Although the detection limit concept is deceptively simple, it is poorly understood by the chemistry community. This has caused a proliferation of terms relating to the detection capabilities of a method, each with different approaches for its determination, and has impeded efforts to harmonize the methodology.
Currie's theory states:
• Limits of detection are actual true values, which can be determined.
• Both limits are chemical measurement process (CMP) performance characteristics and therefore involve all the phases of the analysis. Consequently, they should not be confused with terms that refer exclusively to the detection capabilities of the instrument, such as the IDL.
• Detection limits are not associated with any particular outcome; they are a priori limits.
• Both type I errors (false positives) and type II errors (false negatives) exist.
• The detection decision, on the other hand, is based on an a posteriori limit, the critical value.
• Detection limits should not be confused with sensitivity, which is the slope of the calibration curve.
In developing the limit of detection theory, Currie made a series of assumptions: first, that the measurement distribution of the blanks follows a normal distribution, which is questionable at low concentrations; and secondly, in order to obtain simplified relations, that the standard deviation is constant over the range of concentrations studied (homoscedasticity). Under these specific circumstances the detection capabilities of a method depend exclusively on blank variability [20,26,30,31].
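Under these homoscedastic, normal-distribution assumptions, Currie's simplified relations reduce to a critical value L_C = 1.645·s0 and a detection limit L_D = 3.29·s0 (for α = β = 0.05), both in the signal domain. A minimal sketch, with invented blank signals and calibration slope:

```python
from statistics import stdev

def currie_limits(blank_signals, slope, z=1.645):
    """Currie's simplified relations with alpha = beta = 0.05:
    critical value L_C = 1.645*s0 (a posteriori decision threshold) and
    detection limit L_D = 3.29*s0 (a priori limit), converted to the
    concentration domain via the calibration slope."""
    s0 = stdev(blank_signals)   # blank standard deviation
    l_c = z * s0                # decision threshold in the signal domain
    l_d = 2 * z * s0            # detection limit (= 3.29 * s0)
    return l_c / slope, l_d / slope

# Seven full-scale blank signals and a slope of 50 signal units per ug/mL
# (made-up data, for illustration only)
blanks = [0.12, 0.08, 0.15, 0.10, 0.09, 0.13, 0.11]
xc, xd = currie_limits(blanks, slope=50.0)
```

Note that, exactly as stated above, both limits here depend only on the blank variability s0; the slope enters merely as a unit conversion, with no allowance for its own uncertainty.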
The non-uniform width of the Hubaux-Vos prediction bands itself shows that the assumption of homoscedasticity does not always hold. Currie and other authors [26,28,30,31] have addressed this problem, but noted that if the standard deviation increases too sharply, limits of detection may not be attainable for the CMP in question.
Although in Currie's simplified relations the limits of detection depend exclusively on blank variability, other sources of error are introduced in the transition from the signal domain to the concentration domain, through the uncertainty of the slope and intercept of the calibration curve. Several approaches have been developed to take this into account, such as the propagation-of-errors, Hubaux-Vos and RMSE approaches [23-26,32]. In fact, the IUPAC approach, which does not account for this measurement variability, usually gives artificially low values of the limit of detection, while methods that account for slope and intercept uncertainties, such as the propagation-of-errors and Hubaux-Vos methods, give more realistic estimates, consistent with the reliability of the blank measurement and of the signal measurements of the standards.
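As an illustration of a calibration-based estimate, the RMSE-type approach mentioned above can be sketched as follows, using the residual standard deviation of the regression, s_y/x, in place of the blank standard deviation (LOD = 3.3·s_y/x / slope); the standard concentrations and peak areas below are invented:

```python
import math

def calibration_lod(conc, signal):
    """LOD from an ordinary least-squares calibration line, using the
    residual standard deviation s_y/x (an RMSE-type estimate):
    LOD = 3.3 * s_y/x / slope."""
    n = len(conc)
    mx = sum(conc) / n
    my = sum(signal) / n
    sxx = sum((x - mx) ** 2 for x in conc)
    sxy = sum((x - mx) * (y - my) for x, y in zip(conc, signal))
    slope = sxy / sxx
    intercept = my - slope * mx
    # residual standard deviation with n - 2 degrees of freedom
    ss_res = sum((y - (intercept + slope * x)) ** 2
                 for x, y in zip(conc, signal))
    s_yx = math.sqrt(ss_res / (n - 2))
    return 3.3 * s_yx / slope

conc = [0.0, 0.5, 1.0, 2.0, 4.0]           # standards, ug/mL (hypothetical)
signal = [1.0, 26.0, 52.0, 101.0, 198.0]   # peak areas (hypothetical)
lod = calibration_lod(conc, signal)
```

Because s_y/x absorbs the scatter of every calibration point, this estimate reflects the uncertainty of the slope and intercept rather than the blank alone, which is why such estimates tend to be larger, and more realistic, than the IUPAC value.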
In other words, Currie's simplified equations are only valid when all their assumptions are met (normal distribution, homoscedasticity, the blank as the main source of error). To verify this, a good knowledge of the blanks is needed to generate confidence in the nature of the blank distribution, and some precision in the blank RSD is necessary; therefore, an adequate number of full-scale true blanks must be assayed through the whole CMP.
Most of the assumptions of the IUPAC method are fulfilled in spectrophotometric analysis, for which it was developed and where it has been used successfully. In gas chromatography, however, dynamic measurements are carried out, no practical rules are defined for measuring the blank standard deviation, the error associated with the intercept of the calibration curve is not always negligible, and the presence of interferences is important; it is therefore better to use a method that takes these sources of error into account. Consequently, the IUPAC approach is not recommended for calculating the detection capabilities of chromatographic methods. Instead, the propagation-of-errors, Hubaux-Vos, RMSE and t99sLLMV approaches, which take into account the errors in the measurement of the analyte through a calibration curve, are recommended. A brief comparison of the different approaches for determining the detection capabilities of a CMP can be found in Table 4. Since several methods can be used, and the limits of detection they yield can differ, the method used to obtain any reported limit of detection should be clearly identified, so that meaningful comparisons are possible.

Method
In order to properly determine the limit of detection and limit of quantification of a method, it is necessary to know the theory behind them, to recognize the scope and limitations of each approach, and to be able to choose the method that best suits our CMP. The intention of this chapter is to review the fundamentals of detection limit determination, for the purpose of achieving a better understanding of this key element in trace analysis in particular, and in analytical chemistry in general, and to promote a more scientific and less arbitrary use of this figure of merit, with a view to its harmonization and to dispelling the confusion that still prevails in the chemical community.