Open access peer-reviewed chapter

Practical Considerations on Indirect Calibration in Analytical Chemistry

Written By

Antonio Gustavo González

Submitted: 07 November 2016 Reviewed: 27 March 2017 Published: 05 July 2017

DOI: 10.5772/intechopen.68806

From the Edited Volume

Uncertainty Quantification and Model Calibration

Edited by Jan Peter Hessling


Abstract

Indirect or methodological calibration in chemical analysis is outlined. The establishment of calibration curves is introduced and discussed. Linear calibration is presented and considered in three scenarios commonly faced in chemical analysis: external calibration (EC), when there are no matrix effects in the sample analysis; standard addition calibration (SAC), when these effects are present; and internal standard calibration (ISC), in cases of intrinsic variability of the analytical signal or possible losses of the analyte in stages prior to the measurement. For each kind of calibration, the uncertainty and the confidence interval of the determined analyte concentration are given.

Keywords

  • external calibration
  • standard addition method
  • internal standard
  • measurement uncertainty

1. Introduction

Direct absolute methods such as gravimetry, titrimetry or coulometry (among others) are directly traceable to SI units. The traceability of most contemporary instrumental methods, by contrast, is accomplished by applying indirect calibration procedures. In a direct calibration, the value of the standard (reference value) is expressed in the same quantity as the output of the equipment (for instance, the calibration of an analytical balance). In an indirect calibration, the value of the standard is expressed in a quantity different from the output one, that is, the measurement and the measurand are different. This is the most common kind of calibration in chemical analysis, for example, the calibration of a spectrophotometric method. Accordingly, indirect calibration in analytical chemistry, also known as methodological calibration, is the operation that determines the functional relationship between measured values and analytical quantities characterizing the types of analytes and their amounts. In this chapter, the establishment and validation of the mathematical model for the calibration function will be studied and discussed, as well as the habitual scenarios concerning interferences coming from the chemical environment (matrix effects) and physical/instrumental lack of control leading to signal modification (standard additions and internal standard methodology). Confidence intervals for the calculated analyte concentration will also be outlined and discussed.


2. The calibration in analytical chemistry

Calibration, as previously defined, can be assimilated to a mathematical function, Y = f(x), where Y is the analytical signal or response corresponding to the analyte concentration x. The major analytical aim consists of finding this functional relationship. When applying absolute methods of analysis [1], where traceability is assured, such as gravimetry, titrimetry or coulometry, there is no need for indirect calibration. The analyte amount is evaluated from the analytical signal with the use of physicochemical constants (atomic mass, Faraday constant) and, in titrimetry, the concentration of the standardized titrant solution, leading to a typical linear response model x = KY (a short numerical sketch follows the list):

  • Gravimetry: x = G (gravimetric factor) × Y (mass of the weighing form)

  • Titrimetry: x = p (stoichiometry) × C (titrant concentration) × Y (titrant volume)

  • Coulometry: x = Y (total charge) / [n (electrons transferred) × F (Faraday constant)]
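
As a minimal numerical sketch of these direct relationships, the Python lines below evaluate the three expressions; only the Faraday constant is a real physical constant, all the other numbers are hypothetical and not taken from any worked example in this chapter.

F = 96485.332                                  # Faraday constant, C/mol (the only real constant used)
Q, n = 12.5, 2                                 # hypothetical total charge (C) and electrons transferred
x_coulometry = Q / (n * F)                     # coulometry: amount of analyte in mol
p, C_titrant, V_titrant = 1, 0.1000, 0.02314   # hypothetical stoichiometry, titrant mol/L and volume L
x_titrimetry = p * C_titrant * V_titrant       # titrimetry: amount of analyte in mol
G, m_weighing_form = 0.6994, 0.2507            # hypothetical gravimetric factor and weighing-form mass (g)
x_gravimetry = G * m_weighing_form             # gravimetry: analyte mass in g
print(x_coulometry, x_titrimetry, x_gravimetry)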

But in the field of relative methods (the majority of instrumental ones), traceability is reached by performing an indirect calibration, that is, by establishing the relationship between the analyte concentration and the analytical response. Some theoretical relationships [2–5] hold for particular analytical techniques, as depicted in Table 1.

Response function | Reference | Analytical technique
y = A + Bx | Beer's law | Absorption spectroscopy
y = A + B log x | Nernst's equation | Electrochemistry
y = A x^B | Scheibe–Lomakin equation [2] | Atomic emission spectrometry
y = A + Bx + Cx^2 | Wagenaar et al. [3] | Atomic absorption spectrometry
y = A + B[1 − e^(−Cx)] | Andrews et al. [4] | Atomic absorption spectrometry
y = (A − D)/[1 + (x/C)^B] + D | Rodbard four-parameter logistic equation [5] | Immunoassay

Table 1.

Theoretical response functions used in some analytical instrumental techniques.

Nevertheless, in most situations, the response function has to be established empirically by using standard analyte solutions. Many response functions exhibit linear zones, generally at low analyte concentrations, other zones where a curvature appears and, in some cases, regions where the response signal is independent of the analyte concentration [6]. Analysts are interested in the portion of the response function where the variation of the analytical signal with the analyte concentration contains useful analytical information. This portion of the response function, of analytical interest for calibration purposes, is called the calibration curve. From the calibration curve, the amount of analyte in an unknown sample is evaluated by interpolation. The calibration step is of utmost importance within the realm of method validation.

In many situations, the calibration curve is linear, and a calibration straight line is obtained. Among the mathematical models applied for establishing the response function, the linear one is the most straightforward, the best studied and the easiest to handle. Accordingly, the linear calibration model will be considered throughout this chapter.

In the case of a non-linear response, there are several alternatives. The use of linearizing transformations is a common tool [7], but when this procedure does not work, curve-fitting methods are chosen. The best procedure is to try polynomials of successively larger degree until the F-test of residual variances indicates that the systematic error due to the lack of fit is negligible. If the plot has N points, the highest-degree polynomial that can be used is of degree N−1, but the blind use of high-order polynomials may lead to overfitting. This kind of fitting is solved by multilinear regression [8], a technique that sometimes fails because the coefficient matrix is nearly singular. To avoid this, orthogonal polynomials can be used: they lead to a diagonal coefficient matrix, overcoming singularities and simplifying calculations. The orthogonal polynomials commonly used in curve fitting are those of Chebyshev [9] and Forsythe [10].
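
A minimal sketch of this degree-selection strategy is given below (Python with numpy/scipy, synthetic data not taken from the chapter). As a simple proxy for the lack-of-fit criterion, the residual variances of consecutive polynomial fits are compared with an F-test; the chapter's formal lack-of-fit test with replicated standards is sketched later in Section 4.

import numpy as np
from scipy.stats import f
rng = np.random.default_rng(1)
x = np.linspace(0.5, 10, 12)
y = 0.05 + 0.9 * x - 0.02 * x**2 + rng.normal(0, 0.05, x.size)   # slightly curved synthetic response
def residual_variance(degree):
    # fit a polynomial of the given degree and return its residual variance and degrees of freedom
    coeffs = np.polyfit(x, y, degree)
    resid = y - np.polyval(coeffs, x)
    dof = x.size - (degree + 1)
    return resid @ resid / dof, dof
s2_prev, dof_prev = residual_variance(1)
for degree in range(2, 5):
    s2, dof = residual_variance(degree)
    F_ratio = s2_prev / s2                       # variance ratio between consecutive fits
    p_value = 1 - f.cdf(F_ratio, dof_prev, dof)
    print(f"degree {degree}: residual variance {s2:.4g}, F = {F_ratio:.2f}, p = {p_value:.3f}")
    s2_prev, dof_prev = s2, dof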

Aside from their advantages and applications, orthogonal polynomials are by no means the ultimate tool. Rice proposed rational polynomial functions of the type F(x) = Σᵢ aᵢ xⁱ / Σᵢ bᵢ xⁱ, which present a higher flexibility than orthogonal polynomials for fitting purposes [11]. Another approach is to fit the points to a curve consisting of several linked sections of different geometrical shapes. This is the basis of the spline functions, of which the cubic spline [12] is the most used: the data are approximated by a series of cubic equations. These cubic segments join at p interpolation points called "knots", and it is essential that the spline shows continuity at such points; this continuity applies to the spline function and its first derivatives. A complete cubic spline has p−1 segments, each with four coefficients (S = a + bx + cx² + dx³), so 4(p−1) coefficients have to be calculated. This technique has been successfully applied in radioimmunoassay, gas–liquid chromatography and atomic absorption spectrometry [13].
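
The following short sketch (Python with scipy.interpolate, synthetic data) fits an interpolating cubic spline through a set of calibration points, with the knots at the calibration points themselves, and reads back an unknown concentration from its signal by a simple grid inversion.

import numpy as np
from scipy.interpolate import CubicSpline
x_std = np.array([0.0, 1.0, 2.0, 4.0, 6.0, 8.0])            # analyte concentrations (knots)
y_std = np.array([0.02, 0.35, 0.66, 1.18, 1.55, 1.83])      # hypothetical curved responses
spline = CubicSpline(x_std, y_std)                          # interpolating cubic spline
y_sample = 0.95                                             # measured sample signal
grid = np.linspace(x_std.min(), x_std.max(), 2001)          # simple numerical inversion on a fine grid
x_sample = grid[np.argmin(np.abs(spline(grid) - y_sample))]
print(f"interpolated concentration ≈ {x_sample:.3f}")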

The most usual technique for establishing a calibration straight line is the method of least squares. It consists of minimizing the function Q = Σᵢ (Yᵢ − Ŷᵢ)², where Yᵢ is the observed value of the response at the analyte concentration xᵢ and Ŷᵢ is the estimated response according to the linear model Y = a + bx + ε(Y), that is, Ŷ = a + bx. Setting ∂Q/∂a = 0 and ∂Q/∂b = 0 leads to the values of a and b as well as to their variances and covariance [13].

Three main requisites must be fulfilled before using this method [14], namely:

  • (i) The x variable is free from error: ε(x) = 0.

  • (ii) The error associated with the Y variable, ε(Y), is normally distributed, N(0, σ²).

  • (iii) The variance of the response Y, σ²(Y), remains uniform over the dynamic range of x (homoscedasticity).

In analytical calibrations, the analyte concentration is known with high accuracy and precision and, accordingly, requirement (i) is fulfilled. Condition (ii) is assumed by many researchers without previous testing; there are several statistical tests for normality [13], and they should be performed before embarking on the fitting. Analysts have paid much more attention to requirement (iii). In situations of heteroscedasticity (non-constant variance), the method of least squares can still be applied, but using the so-called weighting factors [15], defined as wᵢ = 1/σ²(Yᵢ). The function to be minimized is now Q = Σᵢ wᵢ (Yᵢ − Ŷᵢ)², leading to expressions similar to those obtained in simple linear regression. This is weighted regression [13].
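
As an illustration (not from the chapter), the sketch below fits the same synthetic straight line by ordinary and by weighted least squares, taking the weights from hypothetical replicate standard deviations; note that numpy.polyfit expects weights proportional to 1/σ rather than 1/σ².

import numpy as np
x = np.array([1.0, 2.0, 4.0, 6.0, 8.0, 10.0])
y = np.array([0.11, 0.20, 0.41, 0.59, 0.80, 1.02])
s = np.array([0.005, 0.006, 0.010, 0.015, 0.022, 0.030])    # hypothetical heteroscedastic signal SDs
b_ols, a_ols = np.polyfit(x, y, 1)                          # ordinary least squares (unit weights)
b_wls, a_wls = np.polyfit(x, y, 1, w=1.0 / s)               # weighted least squares, w ∝ 1/sigma
print(f"OLS: a = {a_ols:.4f}, b = {b_ols:.4f}")
print(f"WLS: a = {a_wls:.4f}, b = {b_wls:.4f}")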

Let us assume that we deal with a situation often found in routine analysis where the three mentioned requirements are fulfilled. In the following, we consider the different scenarios we can face.


3. Metrological foundations on indirect calibration

Consider a newly proposed analytical method which is applied to dissolved test portions of a given sample within the linear dynamic range of the analytical response (Y). This response may be expressed by the following linear relationship involving both the analyte and matrix amounts [16]:

\hat{Y} = A + Bx + Cz + Dxz \qquad (E1)

where Ŷ is the estimated analytical response and A, B, C and D are constants.

A is a constant that does not change when the concentration of the matrix, z, and/or of the analyte, x, changes. It is obviously related to the constant-error blank correction. The blank must account for signals coming from reagents and solvents used in the assay as well as for any bias resulting from interactions between the analyte and the sample's matrix. It is well known that the calibration blank and the reagent blank compensate for signals from reagents and solvents, but neither of them can correct for a bias resulting from an interaction between the analyte and the sample's matrix. The suitable blank must include both the sample's matrix and the analyte, and so it must be determined using the sample itself. The term A is called the true sample blank and can be estimated from the Youden sample plot, which is defined as the "sample response curve" [17]. Thus, by applying the selected analytical method to test portions of different size m (different masses taken from the test sample), different analytical responses Y are obtained. The plot of Y versus m is the Youden sample plot, and the intercept of the corresponding regression line is the so-called total Youden blank (TYB), which is the true sample blank [17–19]. However, when a "matrix without analyte" is available, the term A can be determined by evaluating the system blank (calibration and reagent blank).

Bx is the essential term that justifies the analytical method because it directly deals with the sensitivity to the presence of analyte.

Cz refers to the signal contribution from the matrix, depending only on its amount, z. When this term occurs, the matrix is called interferent. This contribution must be absent, because a validated analytical method should be selective enough with respect to the potential interferences appearing in the samples where the analyte is determined. Accordingly, the majority of validated methods do not suffer from such a direct matrix interference.

Dxz is an interaction analyte/matrix term. This matrix effect occurs when the sensitivity of the instrument to the analyte is dependent on the presence of the other species (the matrix) in the sample [20]. For the sake of determining analytes, this effect may be overcome by using the method of standard additions as we consider later.

Thus, the calibration function remains as:

\hat{Y} = A + Bx + Dxz \qquad (E2)

This function has to be established by using standards and can then be applied to samples according to different methodologies. Calibration standards are prepared from primary standards containing the analyte or a surrogate, that is, a pure substance equivalent to the analyte in chemical composition, separation behaviour and measurement, which is taken as representative of the native analyte; it must be absent from the sample. Commonly, a surrogate is used in an internal standard methodology and in this case is termed an internal standard (IS) [21].

Three different scenarios can be considered for establishing the calibration function in order to determine the analyte in the sample: external calibration (EC), applicable when there is no matrix effect; standard addition calibration (SAC), used when a matrix effect is present; and internal standard calibration (ISC), applied to compensate for uncontrolled variations of the analytical signal. These methodologies are outlined in the following sections.


4. The external calibration

The external calibration (EC) is the most commonly used calibration methodology. It is so named because the calibration standards are not made up in the sample test portion; instead, they are prepared and analysed separately from the samples [21]. Accordingly, the recorded signals account for the analyte added as a primary standard, the reagents, solvents and other agents of the analytical procedure, but not for the sample matrix. Because EC is established in a matrix-free environment, it can be applied for analyte determination only when sample matrix effects are absent; thus, as a preliminary step within method validation, the constant and proportional bias due to matrix effects have to be assessed [22]. In this matrix-free calibration scenario, z = 0, B is the slope of the EC line, b_EC, and A can be taken as the system blank, a_EC. In order to evaluate the goodness of the fit, the regression of the analytical signal on the analyte concentrations of the calibration set yields the calibration curve for the predicted responses. The simplest model is the linear one, very often found in analytical methodology, leading to predicted responses according to

\hat{Y} = a_{EC} + b_{EC}\, x \qquad (E3)

Eq. (3) must be checked for goodness of fit.

The correlation coefficient, r = Σ(xᵢ − x̄)(Yᵢ − Ȳ)/√[Σ(xᵢ − x̄)² Σ(Yᵢ − Ȳ)²], although commonly used, especially for linear models, is not appropriate owing to its limited power for detecting curvature [23, 24]. In statistical theory, correlation measures the association between two random variables, but in our case x and Y are strongly related, so there is no correlation in its mathematical sense. Values of r near +1 or −1 provide an aura of respectability but not much else. Some authors apply statistical tests for the significance of the correlation coefficient, for instance the Student t-test [13], t = |r|√(N − 2)/√(1 − r²), or the Fisher transformation [9], z = ½ ln[(1 + r)/(1 − r)], but they cannot ward off the danger, because the null hypothesis is that the variables are uncorrelated (zero correlation) and, accordingly, even a small r value can be found significantly different from r = 0. As Thompson [23] pointed out, "certainly it is true that, if the calibration points are tightly clustered around a straight line, the experimental value of r will be close to unity. But the converse is not true". Thus, more suitable criteria should be considered. A very simple way to prove that the linear model suitably fits the experimental data, and a good way of searching for possible calibration pathologies, is the analysis of residuals [8, 13, 25]. If the model is suitable, the residuals should be normally distributed, which can be assessed by plotting them on a normal probability graph. A curved residual pattern reveals a lack of fit due to non-linear behaviour, whereas a segmented (funnel-shaped) residual pattern may indicate heteroscedasticity in the data, in which case a weighted linear regression could be used.
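
A minimal residual check along these lines might look as follows (Python with scipy.stats, synthetic data): the straight line is fitted, the residuals are tested for normality and the quantiles for a normal probability plot are generated; the plotting itself (e.g. with matplotlib) is omitted.

import numpy as np
from scipy import stats
x = np.array([0.5, 1.0, 2.0, 4.0, 6.0, 8.0])
y = np.array([0.06, 0.11, 0.21, 0.40, 0.61, 0.79])
slope, intercept, r, p, stderr = stats.linregress(x, y)
residuals = y - (intercept + slope * x)
print("Shapiro-Wilk p-value:", stats.shapiro(residuals).pvalue)   # normality of the residuals
osm, osr = stats.probplot(residuals, dist="norm")[0]              # data for a normal probability plot
# a curved residual pattern vs. x suggests non-linearity; a funnel shape suggests heteroscedasticity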

Another parameter measuring the goodness of fit is the so-called on-line linearity [26], which measures the dispersion of the points around the calibration straight line and is evaluated as the relative standard deviation of its slope: on-line linearity = RSD_b_EC = s_b_EC/b_EC. The typical critical threshold for accepting the linear model is RSD_b ≤ 0.05.

Nevertheless, the best way to test the goodness of fit is to compare the variance of the lack of fit against the pure-error variance [27]. For an adequate assessment of the lack of fit of the linear model, a suitable experimental design for performing the calibration is needed, as indicated in the following [28]:

  1. At least six calibration points spaced over the concentration range of the method scope are required for establishing the calibration straight line.

  2. Calibration standards should be measured over 5 days for suitably covering the possible sources of uncertainty.

  3. Each calibration standard should be measured in triplicate to account for pure error variance.

From these data, we can test homoscedasticity. We have triplicate responses for each calibration standard and hence an estimate of the pure-error variance of the response at each calibration point. We can apply Cochran's test because the number of observations is the same at all concentration levels of the analyte. Thus, if the number of calibration standards is N and each one is replicated n times, the Cochran statistic is calculated as:

C = \frac{s_{\max}^2}{\sum_{i=1}^{N} s_i^2} \qquad (E4)

where s_i² is the response variance at concentration level i and s_max² is the maximum of these variances. This value is compared against the critical tabulated value C_tab(N, n, P), P being the selected confidence level. If C ≤ C_tab, the response variances can be considered uniform across the range of analyte concentrations, and a pooled sum of squares due to pure error, SSPE, can be obtained:

SSPE = \sum_{i=1}^{N}\sum_{j=1}^{n} (Y_{ij} - \bar{Y}_i)^2 = (n-1)\sum_{i=1}^{N} s_i^2 \qquad (E5)

The residual sum of squares of the model SSR is given by

SSR = \sum_{i=1}^{N}\sum_{j=1}^{n} (Y_{ij} - \hat{Y}_i)^2 \qquad (E6)

where Y_ij is the recorded analytical signal of calibration point i at replicate j and Ŷ_i is the corresponding response predicted by the model. SSR can be split into two terms: the sum of squares corresponding to pure error (SSPE) and the sum of squares corresponding to the lack of fit (SSLOF):

SSLOF = SSR - SSPE = \sum_{i=1}^{N} n\,(\bar{Y}_i - \hat{Y}_i)^2 \qquad (E7)

The pure-error variance is SSPE/[N(n−1)] and, since SSR has Nn−2 degrees of freedom, the variance of the lack of fit is SSLOF/(N−2). So, for assessing the adequacy of the model, the Fisher F-test is applied:

F = \frac{(SSR - SSPE)/(N-2)}{SSPE/[N(n-1)]} \qquad (E8)

The calibration model is considered suitable if F is smaller than the one-tailed tabulated value F_tab(N−2, N(n−1), P) at the given confidence level P.
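
The sketch below (Python with numpy/scipy, synthetic data) implements both checks for an N-level, n-replicate design: the Cochran statistic of Eq. (4), to be compared with a tabulated critical value, and the lack-of-fit F-test of Eqs. (5)-(8) with N−2 and N(n−1) degrees of freedom.

import numpy as np
from scipy import stats
x = np.array([1.0, 2.0, 4.0, 6.0, 8.0, 10.0])                     # N = 6 calibration standards
Y = np.array([[0.10, 0.11, 0.10], [0.21, 0.20, 0.22], [0.41, 0.40, 0.42],
              [0.60, 0.61, 0.59], [0.81, 0.80, 0.82], [1.00, 1.02, 1.01]])  # n = 3 replicates
N, n = Y.shape
s2 = Y.var(axis=1, ddof=1)                                        # per-level response variances
C = s2.max() / s2.sum()                                           # Cochran statistic, Eq. (4)
print(f"Cochran C = {C:.3f}  (compare with the tabulated C_tab(N={N}, n={n}, P))")
b, a = np.polyfit(np.repeat(x, n), Y.ravel(), 1)                  # straight line on all individual points
SSPE = ((Y - Y.mean(axis=1, keepdims=True)) ** 2).sum()           # pure error, df = N*(n-1)
SSR = ((Y - (a + b * x)[:, None]) ** 2).sum()                     # residual, df = N*n - 2
SSLOF = SSR - SSPE                                                # lack of fit, df = N - 2
F = (SSLOF / (N - 2)) / (SSPE / (N * (n - 1)))
print(f"lack-of-fit F = {F:.2f}, p = {1 - stats.f.cdf(F, N - 2, N * (n - 1)):.3f}")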

Once the model has been found adequate, the analyte determination is carried out by interpolating the analytical signal of the sample on the calibration model. Typical statistical calculations for evaluating the variances of the slope and the intercept and their covariance, as well as the uncertainty associated with the estimated analyte concentration, can be found in several texts, for instance Miller and Miller [13]. Thus, if Y₀ is the response signal recorded by applying the analytical method to the sample, the concentration of native analyte, x₀, is given by

\hat{x}_0 = \frac{Y_0 - a_{EC}}{b_{EC}} \qquad (E9)

In order to evaluate its standard deviation, and the corresponding expanded uncertainty, the theorem of variance propagation is applied. The propagation of variance is the common approach for evaluating the uncertainty of indirect measurements according to the current edition of the Guide to the Expression of Uncertainty in Measurement (GUM). However, an essential limitation has to be taken into account: the non-linearity of the function (here the calibration function) must be negligible. This is fundamental because the function is expanded in a Taylor series that is then truncated by neglecting second- and higher-order terms. To avoid this drawback, the propagation of distributions, instead of the propagation of variance, is a very suitable way of estimating the measurement uncertainty, and the application of the Monte Carlo method to carry out the propagation of distributions is very effective [29].

Saying that brute-force Monte Carlo (MC) methods are "very effective" may seem strange to some readers, as one major problem of MC is its methodological inefficiency, due to the large sampling variance of the relatively small samples acceptable in computationally demanding applications. In other words, any acceptable sample of 100 values may have a large, random, unknown error, generally different from that of any other sample of comparable size. To overcome this inefficiency, approximate simplified surrogate models are often used to allow sampling as much as 10⁶ times, just to reduce the sampling variability. I would thus rather call MC methods "general", "useful", "simple" and "powerful", as they apply to any parametric model and any distribution (provided a random generator can be found) and can be utilized by anybody with very little statistical training.
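
As a minimal sketch of the propagation of distributions for the present problem, the Python lines below sample hypothetical Gaussian inputs for Y₀, a_EC and b_EC and propagate them through x₀ = (Y₀ − a_EC)/b_EC; for simplicity, the covariance between intercept and slope is neglected here, although it could be included by sampling them jointly.

import numpy as np
rng = np.random.default_rng(0)
M = 100_000                                     # number of Monte Carlo trials
Y0 = rng.normal(0.52, 0.005, M)                 # sample signal (hypothetical mean and SD)
a = rng.normal(0.010, 0.004, M)                 # intercept of the calibration line
b = rng.normal(0.100, 0.002, M)                 # slope of the calibration line
x0 = (Y0 - a) / b                               # propagate the distributions through Eq. (9)
low, high = np.percentile(x0, [2.5, 97.5])      # 95 % coverage interval from the output distribution
print(f"x0 = {x0.mean():.3f} ± {x0.std(ddof=1):.3f}; 95 % interval [{low:.3f}, {high:.3f}]")
# note: the a-b covariance is neglected in this sketch; it could be included by joint sampling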

But in our case, where the calibration function has been considered linear, the theorem of variance propagation can be applied without risk:

s_{x_0}^2 = \left(\frac{\partial x_0}{\partial Y_0}\right)^2 s_{Y_0}^2 + \left(\frac{\partial x_0}{\partial a_{EC}}\right)^2 s_{a_{EC}}^2 + \left(\frac{\partial x_0}{\partial b_{EC}}\right)^2 s_{b_{EC}}^2 + 2\,\frac{\partial x_0}{\partial a_{EC}}\,\frac{\partial x_0}{\partial b_{EC}}\,\mathrm{cov}(a_{EC}, b_{EC}) = \left(\frac{1}{b_{EC}}\right)^2 s_{Y_0}^2 + \left(\frac{1}{b_{EC}}\right)^2 s_{a_{EC}}^2 + \left(\frac{Y_0 - a_{EC}}{b_{EC}^2}\right)^2 s_{b_{EC}}^2 - 2\,\frac{Y_0 - a_{EC}}{b_{EC}^3}\,\bar{x}\, s_{b_{EC}}^2 \qquad (E10)

where cov(a_EC, b_EC) = −x̄ s_b_EC² [13].

Considering the following equivalences (see Ref. [13] for instance):

s_{Y_0}^2 = s_R^2 = \frac{SSR}{N-2}, \qquad s_{a_{EC}}^2 = s_R^2\left(\frac{1}{N} + \frac{\bar{x}^2}{S_{xx}}\right), \qquad s_{b_{EC}}^2 = \frac{s_R^2}{S_{xx}}, \qquad S_{xx} = \sum_i (x_i - \bar{x})^2 \qquad (E11)

After some algebraic manipulation, we get

s_{x_0}^2 = \frac{s_R^2}{b_{EC}^2}\left[1 + \frac{1}{N} + \frac{(Y_0 - \bar{Y})^2}{b_{EC}^2 S_{xx}}\right] \qquad (E12)

If the signal Y0 is obtained as the average of m measurements, we have

s_{x_0}^2 = \frac{s_R^2}{b_{EC}^2}\left[\frac{1}{m} + \frac{1}{N} + \frac{(Y_0 - \bar{Y})^2}{b_{EC}^2 S_{xx}}\right] \qquad (E13)

The corresponding expanded uncertainty can be evaluated from the tabulated Student t-statistic or, assuming a Gaussian distribution, from the z-score at a given confidence level (generally P = 95%), so that:

U_{x_0} = t_{tab}(N-2,\, 95\%)\; s_{x_0} \qquad \text{or} \qquad U_{x_0} = z_{95\%}\; s_{x_0} \approx 2\, s_{x_0}; \qquad \text{confidence interval: } x_0 \pm U_{x_0} \qquad (E14)
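
Putting Eqs. (9), (13) and (14) together, a minimal external-calibration sketch might read as follows (Python with numpy/scipy, synthetic data; Y₀ is taken as the mean of m replicate sample readings).

import numpy as np
from scipy import stats
x = np.array([1.0, 2.0, 4.0, 6.0, 8.0, 10.0])
Y = np.array([0.112, 0.205, 0.410, 0.598, 0.806, 1.004])
N = x.size
b, a = np.polyfit(x, Y, 1)
resid = Y - (a + b * x)
s_R = np.sqrt(resid @ resid / (N - 2))                            # residual standard deviation
Sxx = ((x - x.mean()) ** 2).sum()
Y0, m = 0.550, 3                                                  # mean of m replicate sample readings
x0 = (Y0 - a) / b                                                 # Eq. (9)
s_x0 = (s_R / b) * np.sqrt(1/m + 1/N + (Y0 - Y.mean())**2 / (b**2 * Sxx))   # Eq. (13)
U = stats.t.ppf(0.975, N - 2) * s_x0                              # Eq. (14), 95 % expanded uncertainty
print(f"x0 = {x0:.3f} ± {U:.3f}")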

EC is adequate for analytical procedures that can be considered free from matrix effects, but its main limitation comes from the assumption that the different environments (matrices) of the calibration standards (solvent, buffer, …) and of the samples are equivalent and have no effect on the calibration function [21]. If this assumption is incorrect, additive and/or proportional systematic errors may appear. Accordingly, in a preliminary stage within method validation, constant and proportional bias due to matrix effects must be investigated with the help of standard addition calibration and the Youden plot [22].


5. Standard addition calibration

The standard addition calibration (SAC) or standard addition method was originally proposed in 1937 by Hans Hohn in polarographic studies [30]. He used this strategy in order to avoid matrix effects on the measured signal, and nowadays it is widely used in chemical analysis. SAC can be applied with three fundamental goals [31]:

  • To determine analytes in samples where the analyte‐matrix interactions lead to inaccurate results when the EC is used.

  • To determine analytes where the content in the sample is smaller than the quantitation limit but within the range of analytical sensitivity.

  • To check the accuracy of an analytical result when no reference materials or reference method is available (recovery assay).

In essence, the calibration for the two first purposes comprises three steps [32]:

  1. Measure the analytical response produced by the test solution.

  2. Spike the test solution with one or more amounts of analyte to get corresponding solutions and measure the new responses.

  3. From the responses, calculate a straight‐line fit of the experimental data and from that evaluate the concentration that produced the response obtained from the untreated test solution.

The SAC can be performed either at a final fixed volume or at a variable volume [19]. In this discussion, we only consider the first case by working at constant final volume.

Consider now the application of the analytical procedure to a dissolved test portion of an unknown sample within the linear working range. The analyte concentration x is the sum of the fixed native concentration coming from the sample (test portion volume V₀) and the variable spiked concentration (spiked volume V_spike), the final volume V being kept constant. The amount of matrix in the test portion (z) is also constant. Accordingly, the analytical response can now be modelled as:

\hat{Y} = A + Bx + Dxz = A + (B + Dz)\,x = A + (B + Dz)\,\frac{V_0 C_{native}^0 + V_{spike} C_{spike}^0}{V} = A + (B + Dz)\,C_{native} + (B + Dz)\,C_{spike} = a_{SAC} + b_{SAC}\, C_{spike} \qquad (E15)

where C_native (= V₀C⁰_native/V) is the concentration of native analyte in the final measured solution, C_spike (= V_spike C⁰_spike/V) is the concentration of spiked analyte, and a_SAC and b_SAC are the intercept and the slope of the SAC calibration straight line. If we try to estimate the analyte concentration of a spiked sample by using the external calibration line, we obtain an estimate of the total observed analyte concentration:

\hat{C}_{obs} = \frac{\hat{Y} - a_{EC}}{b_{EC}} = \frac{a_{SAC} - a_{EC} + b_{SAC}\, C_{spike}}{b_{EC}} \qquad (E16)

For the unspiked sample (C_spike = 0), we obtain

\hat{C}_{native} = \frac{a_{SAC} - a_{EC}}{b_{EC}} \qquad (E17)

According to Eqs. (16) and (17), the spiked concentration of the analyte is estimated from the external calibration as:

\hat{C}_{spike} = \hat{C}_{obs} - \hat{C}_{native} = \frac{b_{SAC}}{b_{EC}}\, C_{spike} \qquad (E18)

From Eq. (18), an estimate of the overall consensus recovery is calculated as:

\mathrm{Rec} = \frac{\hat{C}_{spike}}{C_{spike}} \qquad (E19)

When proportional bias is absent, b_SAC = b_EC, which implies Rec = 1. This must be tested for statistical significance by using the Student t-test [22]:

t = \frac{|\mathrm{Rec} - 1|}{s_{\mathrm{Rec}}} \qquad (E20)

with the recovery standard deviation given by:

s_{\mathrm{Rec}} = \sqrt{\frac{s_{b_{SAC}}^2}{b_{EC}^2} + \frac{b_{SAC}^2\, s_{b_{EC}}^2}{b_{EC}^4}} \qquad (E21)

Thus, if the degrees of freedom ν corresponding to the uncertainty of the consensus recovery are known, the Student t-statistic is compared with the critical two-tailed tabulated value, t_tab(ν, P), at P% confidence. If t ≤ t_tab, the consensus recovery is not significantly different from 1. Alternatively, instead of t_tab, a coverage factor k taken as a z-score may be used for the comparison; typical values are k = 2 or k = 3 for 95 or 99% confidence, respectively [22], so:

  • if |Rec − 1|/s_Rec ≤ k, the recovery is not significantly different from 1;

  • if |Rec − 1|/s_Rec > k, the recovery is significantly different from 1, and the results have to be corrected by Rec.

Although recovery is sometimes considered a separate validation parameter, it should be established as part of method validation because it is directly related to the trueness assessment [33]. Aside from the statistical testing considered above, the Association of Official Analytical Chemists (AOAC) has published tables of acceptable recovery percentages as a function of the level of analyte in the sample (see Table 1 of [22]). The relative uncertainty for proportional bias owing to matrix effects is taken as s_Rec/Rec according to SAC.
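
A minimal numerical sketch of this recovery check (Python, hypothetical slope values and standard deviations, not from the chapter) is given below.

import numpy as np
b_EC, s_b_EC = 0.1000, 0.0012                   # hypothetical EC slope and its SD
b_SAC, s_b_SAC = 0.0925, 0.0018                 # hypothetical SAC slope and its SD
Rec = b_SAC / b_EC                              # consensus recovery, Eqs. (18)-(19)
s_Rec = np.sqrt(s_b_SAC**2 / b_EC**2 + b_SAC**2 * s_b_EC**2 / b_EC**4)   # Eq. (21)
k = 2                                           # coverage factor (about 95 % confidence)
if abs(Rec - 1) / s_Rec <= k:
    print(f"Rec = {Rec:.3f}: not significantly different from 1")
else:
    print(f"Rec = {Rec:.3f} ± {s_Rec:.3f}: proportional bias, correct the results by Rec")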

The relationship between the analytical signal and the analyte concentration when a matrix effect is present is given by Eq. (15). The independent term A is the total Youden blank, which is included in the intercept of the SAC calibration (a_SAC = A + b_SAC C_native). The Youden plot [17–19] consists of plotting the analytical response (Y) against the amount of test portion taken for analysis:

\hat{Y} = A + b_Y\, w_{sample} \qquad (E22)

The intercept of this plot is an estimate of the TYB, which is the sum of the system blank (SB), corresponding to the intercept of the EC (a_EC), and the Youden blank (YB), associated with the constant bias of the method [13]. Thus, we can equate TYB = A, SB = a_EC and YB = A − a_EC. We can then define the method constant bias as:

\theta = \frac{A - a_{EC}}{b_{EC}} \qquad (E23)

The uncertainty of the constant bias can be obtained by the law of variance propagation [22]:

s_{\theta} = \sqrt{\frac{s_A^2}{b_{EC}^2} + \frac{s_{a_{EC}}^2}{b_{EC}^2} + \frac{(A - a_{EC})^2\, s_{b_{EC}}^2}{b_{EC}^4} + \frac{2\,(A - a_{EC})}{b_{EC}^3}\,\mathrm{cov}(a_{EC}, b_{EC})} \qquad (E24)

The variances s_a_EC², s_b_EC² and the covariance are obtained from the statistical parameters of the EC straight line, and s_A² from the Youden plot. Once s_θ is calculated, the constant bias may be assessed for significance as in the case of the recovery:

  • If |θ|/s_θ ≤ k, the constant bias is not significantly different from 0.

  • If |θ|/s_θ > k, the constant bias is significantly different from 0, and the results have to be corrected by θ.

Accordingly, if the assessment of proportional and constant bias shows that matrix effects are present, the uncorrected result x₀,uncorr found by EC must be suitably corrected as

x_0 = \frac{x_{0,\,uncorr} - \theta}{\mathrm{Rec}} \qquad (E25)
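
The sketch below (Python, hypothetical values for the Youden-plot intercept, the EC parameters and their uncertainties) evaluates the constant bias of Eqs. (23)-(24), tests it against a coverage factor k and, if needed, applies the correction of Eq. (25).

import numpy as np
A, s_A = 0.018, 0.004                           # hypothetical Youden-plot intercept (TYB) and its SD
a_EC, s_a_EC = 0.010, 0.003                     # hypothetical system blank and its SD
b_EC, s_b_EC = 0.100, 0.0012                    # hypothetical EC slope and its SD
cov_ab = -5.0e-6                                # cov(a_EC, b_EC) from the EC regression
theta = (A - a_EC) / b_EC                       # Eq. (23)
s_theta = np.sqrt(s_A**2 / b_EC**2 + s_a_EC**2 / b_EC**2
                  + (A - a_EC)**2 * s_b_EC**2 / b_EC**4
                  + 2 * (A - a_EC) / b_EC**3 * cov_ab)            # Eq. (24)
k, Rec, x0_uncorr = 2, 0.925, 5.41              # coverage factor, recovery, uncorrected EC result
if abs(theta) / s_theta > k:
    print(f"theta = {theta:.3f}: corrected x0 = {(x0_uncorr - theta) / Rec:.3f}")   # Eq. (25)
else:
    print(f"theta = {theta:.3f}: constant bias not significant")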

Another way of getting the correct result from the reading of the analytical signal Y₀ is

x_0 = \frac{Y_0 - A}{b_{SAC}} \qquad (E26)

On the other hand, when using the SAC for evaluating the analyte concentration x0 of a sample, its standard deviation can be obtained by applying the theorem of variance propagation to the function

x_0 = \frac{a_{SAC} - A}{b_{SAC}} \qquad (E27)

leading to

s_{x_0}^2 = \left(\frac{\partial x_0}{\partial A}\right)^2 s_A^2 + \left(\frac{\partial x_0}{\partial a_{SAC}}\right)^2 s_{a_{SAC}}^2 + \left(\frac{\partial x_0}{\partial b_{SAC}}\right)^2 s_{b_{SAC}}^2 + 2\,\frac{\partial x_0}{\partial a_{SAC}}\,\frac{\partial x_0}{\partial b_{SAC}}\,\mathrm{cov}(a_{SAC}, b_{SAC}) = \left(\frac{1}{b_{SAC}}\right)^2 s_A^2 + \left(\frac{1}{b_{SAC}}\right)^2 s_{a_{SAC}}^2 + \left(\frac{a_{SAC} - A}{b_{SAC}^2}\right)^2 s_{b_{SAC}}^2 + 2\,\frac{a_{SAC} - A}{b_{SAC}^3}\,\bar{x}\, s_{b_{SAC}}^2 \qquad (E28)

where cov(a_SAC, b_SAC) = −x̄ s_b_SAC².

After some algebraic manipulation, we obtain

s_{x_0}^2 = \frac{s_{y/x}^2}{b_{SAC}^2}\left[\frac{1}{N} + \frac{\bar{Y}^2}{b_{SAC}^2 S_{xx}} + \frac{A\,(A - 2\bar{Y})}{b_{SAC}^2 S_{xx}}\right] + \frac{s_A^2}{b_{SAC}^2} \qquad (E29)

But many workers apply the SAC without considering the true blank, that is, by setting A = 0 and s_A = 0, leading to

s_{x_0}^2 = \frac{s_{y/x}^2}{b_{SAC}^2}\left[\frac{1}{N} + \frac{\bar{Y}^2}{b_{SAC}^2 S_{xx}}\right] \qquad (E30)

This expression is presented in several standard analytical textbooks, for instance [13, 19, 32, 34]. However, Ortiz et al. [35] pointed out that, when extrapolating, the analyte concentration is obtained by setting Y₀ = 0 and calculating x₀ = a_SAC/b_SAC, but even in this case the uncertainty of the signal must be included in the calculations, leading to

s_{x_0}^2 = \frac{s_{y/x}^2}{b_{SAC}^2}\left[1 + \frac{1}{N} + \frac{\bar{Y}^2}{b_{SAC}^2 S_{xx}}\right] \qquad (E31)

The SAC, as outlined so far, is an extrapolation method, but an interpolation approach is also available [32, 36]. A plot of the data obtained from SAC, showing how the analyte concentration is predicted by extrapolation, is depicted in Figure 1; the interpolation alternative is also shown there and is discussed in the following.

Figure 1.

Extrapolation and interpolation approaches for predicting the native analyte concentration of a sample by using the SAC.

What value of the analytical signal Y0 will correspond to a spiked x value that is equal to the concentration of the native analyte? That is:

Y_0 = a_{SAC} + b_{SAC}\, x_0 = A + 2\, b_{SAC}\, x_0 = 2\, Y_{unspiked} - A \qquad (E32)

And if we disregard the true blank, we get

Y_0 = a_{SAC} + b_{SAC}\, x_0 = 2\, b_{SAC}\, x_0 = 2\, Y_{unspiked} \qquad (E33)

Thus, the native analyte concentration can be obtained by interpolation, taking as the analytical signal of the sample twice the signal of the unspiked sample minus the true blank:

\hat{x}_0 = \frac{Y_0 - a_{SAC}}{b_{SAC}} \qquad (E34)

This leads to a variance for the native analyte

s_{x_0}^2 = \frac{s_{y/x}^2}{b_{SAC}^2}\left[1 + \frac{1}{N} + \frac{(Y_0 - \bar{Y})^2}{b_{SAC}^2 S_{xx}}\right] \qquad (E35)

According to Andrade et al. [32, 36], the use of extrapolation in the SAC is a risky practice because it may lead to biased predictions and to uncertainties substantially different from those of interpolation; confidence intervals from extrapolation are always wider than those obtained by interpolation.
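
The following sketch (Python with numpy, synthetic standard-addition data, true blank taken as A = 0 for simplicity) compares the extrapolated estimate and its variance, Eq. (31), with the interpolated estimate obtained at Y₀ = 2Y_unspiked, Eqs. (33)-(35).

import numpy as np
c_spike = np.array([0.0, 1.0, 2.0, 3.0, 4.0])              # added analyte concentrations
Y = np.array([0.210, 0.305, 0.398, 0.505, 0.601])          # responses of the spiked test solutions
N = c_spike.size
b, a = np.polyfit(c_spike, Y, 1)
resid = Y - (a + b * c_spike)
s_yx = np.sqrt(resid @ resid / (N - 2))
Sxx = ((c_spike - c_spike.mean()) ** 2).sum()
x0_ext = a / b                                              # extrapolation to Y = 0
s2_ext = (s_yx**2 / b**2) * (1 + 1/N + Y.mean()**2 / (b**2 * Sxx))            # Eq. (31)
Y0 = 2 * Y[0]                                               # interpolation signal, Eq. (33), with A = 0
x0_int = (Y0 - a) / b                                       # Eq. (34)
s2_int = (s_yx**2 / b**2) * (1 + 1/N + (Y0 - Y.mean())**2 / (b**2 * Sxx))     # Eq. (35)
print(f"extrapolation: x0 = {x0_ext:.3f}, s = {np.sqrt(s2_ext):.3f}")
print(f"interpolation: x0 = {x0_int:.3f}, s = {np.sqrt(s2_int):.3f}")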


6. The internal standard calibration

The method of internal standard calibration (ISC) was first applied in the 1950s in several analytical fields [37–41]. This method is especially useful when the analytical response varies slightly from run to run due to different causes, for instance:

  • Temperature fluctuations in atomic emission spectrometry.

  • Changes in the capillary characteristics in polarography.

  • Inhomogeneities in the effective magnetic field due to shielding effect in nuclear magnetic resonance (NMR).

  • Variability in the injection volume in gas chromatography (manual injection).

  • Irreproducibility of automatic injectors in capillary electrophoresis.

  • Differences in the nature of particulate matter in the sample in X‐ray fluorescence.

The use of an internal standard is also needed for analytical methods comprising multiple sample preparation steps, especially when the volumetric recovery at each step may vary (extraction with separation cartridges) or when chemical derivatizations with low or variable reaction yields are involved.

An internal standard is a substance different from the analyte but with physicochemical properties very similar to it. Evidently, the internal standard cannot be a component of the sample.

It is added in known amounts to the sample and to the standards, and the signals produced by both the analyte and the internal standard are measured. If a signal oscillation occurs between repeated measurements, it will affect the analyte and the internal standard alike, so the ratio of their signals will not change.

Thus, instead of the response Y, the ratio of responses Y/Y_IS is used in the calibration procedure. Assuming that, in the instrumental method, the signal is directly proportional to the analyte and to the internal standard concentrations, we get:

Y = k\, x, \qquad Y_{IS} = k_{IS}\, x_{IS} \;\;\Rightarrow\;\; \frac{Y}{Y_{IS}} = F\, \frac{x}{x_{IS}} \qquad (E36)

Here, Y is the analytical signal due to the analyte and Y_IS is the analytical signal corresponding to the internal standard. The calibration straight line is established as in EC, by preparing standards at several analyte concentrations with the same fixed internal standard concentration x_IS; the calibration constant F is thus evaluated. Whereas the dispersion of the calibration straight line Y = kx may be significant, that obtained with the ISC is negligible.

The sample is then treated in the same way, by spiking the internal standard at the same concentration as in the standards. Thus, if the reading of the sample is (Y₀/Y_IS,0):

\hat{x}_0 = \frac{Y_0}{Y_{IS,0}}\;\frac{x_{IS}}{F} \qquad (E37)

By applying the variance propagation law and considering the variance of x_IS negligible, we get

s_{x_0} = \frac{x_{IS}\, s_R}{F}\,\sqrt{1 + \frac{x_{IS}^2\, (Y_0/Y_{IS,0})^2}{F^2\, \sum_i x_i^2}} \qquad (E38)
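
A minimal sketch of Eqs. (36)-(38) is given below (Python with numpy, synthetic signal ratios, not from the chapter): the slope F is obtained by least squares through the origin on the ratios, and the sample is then quantified from its own ratio, with the uncertainty of Eq. (38).

import numpy as np
x = np.array([1.0, 2.0, 4.0, 6.0, 8.0])            # analyte concentrations in the standards
x_IS = 5.0                                          # internal standard concentration (same everywhere)
R = np.array([0.21, 0.40, 0.79, 1.22, 1.60])        # measured Y / Y_IS ratios of the standards
u = x / x_IS
F = (u @ R) / (u @ u)                               # least-squares slope through the origin, Eq. (36)
resid = R - F * u
s_R = np.sqrt(resid @ resid / (x.size - 1))         # residual SD (one fitted parameter)
R0 = 0.95                                           # Y0 / Y_IS0 measured on the sample
x0 = x_IS * R0 / F                                  # Eq. (37)
s_x0 = (x_IS * s_R / F) * np.sqrt(1 + x_IS**2 * R0**2 / (F**2 * (x**2).sum()))   # Eq. (38)
print(f"x0 = {x0:.3f} ± {s_x0:.3f}")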

The main advantage of ISC is that this quantification method does not need a previous calibration, because the calibration is implicit in the quantification [21]. Accordingly, a one-point calibration can be used: it only requires adding known and equal amounts of internal standard to the standard analyte solution and to the sample solution and measuring the analytical signals of the analyte and of the internal standard in both. Evidently, the signals of the analyte and of the internal standard must be distinguishable, without overlapping.

Thus,

\frac{Y_{std}}{Y_{IS,std}} = F\,\frac{x_{std}}{x_{IS}} \quad \text{and} \quad \frac{Y_0}{Y_{IS,0}} = F\,\frac{x_0}{x_{IS}} \;\;\Rightarrow\;\; \hat{x}_0 = x_{std}\; \frac{Y_0/Y_{IS,0}}{Y_{std}/Y_{IS,std}} \qquad (E39)

Another exclusive feature of ISC is the possibility of quantifying several analytes of the same chemical family in the same test portion, with a single internal calibration and a single internal standard. Consequently, the mass fraction of each analyte can be evaluated according to [21]:

\%\, x_{0i} = \frac{x_{0i}}{\sum_j x_{0j}} \times 100 = \frac{(x_{IS}/F)\,(Y_{i0}/Y_{IS,0})}{\sum_j (x_{IS}/F)\,(Y_{j0}/Y_{IS,0})} \times 100 = \frac{Y_{i0}}{\sum_j Y_{j0}} \times 100 \qquad (E40)

Accordingly, ISC is a very powerful method for congener analysis (for instance, in fat analysis: the determination of waxes, sterols, aliphatic alcohols and so on) using only a single internal standard.


7. Synthesis

Indirect calibration is a key concept for method validation. Instrumental analysis involving indirect calibration is a common feature of routine analysis, and three typical scenarios can be found, depending on the analyte-matrix interaction and on uncontrolled variations of the analytical signal owing to intrinsic characteristics of the analytical process. When the interaction with the sample matrix is negligible, external calibration is the normal choice; otherwise, standard addition calibration together with the Youden plot has to be applied. In cases where there are non-random run-to-run signal variations, or possible analyte losses due to sample preparation procedures or derivatization reactions, internal standard calibration must be considered. These three approaches have been outlined and discussed, and uncertainty values for the analyte concentration coming from the calibration step have been evaluated from the calibration data.

References

  1. Hulanicki A. Absolute methods in analytical chemistry. Pure and Applied Chemistry. 1995;67:1905–1911
  2. Boumans PWJM. Theory of Spectrochemical Excitation. London, UK: Hilger & Watts; 1966. p. 383
  3. Wagenaar HC, Novotny I, Degalan L. Influence of hollow‐cathode lamp line‐profiles upon analytical curves in atomic absorption spectroscopy. Spectrochimica Acta Part B: Atomic Spectroscopy. 1974;29:301–317
  4. Andrews JAS, Jowett A. A numerical aid for evaluation of atomic absorption spectrometric results. Analytica Chimica Acta. 1982;134:383–388
  5. O'Connell MA, Belanger BA, Haaland PD. Calibration and assay development using the four parameter logistic model. Chemometrics and Intelligent Laboratory Systems. 1990;20:97–114
  6. McDowell LM. Effect of detector nonlinearity on the height, area, width and moments of peaks in liquid chromatography with absorbance detectors. Analytical Chemistry. 1981;53:1373–1376
  7. Carroll RJ, Ruppert D. Transformations and Weighting in Regression. Dordrecht, The Netherlands: Elsevier; 1988. p. 249
  8. Draper NR, Smith H. Applied Regression Analysis. 3rd ed. New York: Wiley; 1998. p. 706
  9. Akhnazarova S, Kafarov V. Experiment Optimization in Chemistry and Chemical Engineering. Moscow: MIR; 1982. p. 312
  10. Kragten J. Least‐squares polynomial curve fitting for calibration purposes (STATCALIBRA). Analytica Chimica Acta. 1990;241:1–13
  11. Rice JR. The Approximation of Functions, Vol. 2, Non‐linear and Multivariate Theory. Reading, MA: Addison‐Wesley; 1969. p. 334
  12. Ahlberg JH, Nilson EN, Walsh JL. The Theory of Splines and Their Applications. New York: Academic Press; 1967. p. 284
  13. Miller JN, Miller JC. Statistics and Chemometrics for Analytical Chemistry. 6th ed. Essex, UK: Pearson Education Limited; 2010. p. 278
  14. Agterdenbos J. Calibration in quantitative analysis. 1. General considerations. Analytica Chimica Acta. 1979;108:315–323
  15. Asuero AG, González AG. Some observations on fitting a straight line to data. Microchemical Journal. 1989;40:216–225
  16. González AG, Herrador MA, Asuero AG. Intra‐laboratory testing of method accuracy from recovery assays. Talanta. 1999;48:729–736
  17. Cardone MJ. New technique in chemical assay calculations. 2. Correct solution of the model problem and related concepts. Analytical Chemistry. 1986;58:483–445
  18. Cardone MJ, Willavice SA, Lacy ME. Method validation revisited: A chemometric approach. Pharmaceutical Research. 1990;7(2):134–160
  19. Harvey D. Analytical Chemistry 2.0. Electronic edition: http://www.asdlib.org/onlineArticles/ecourseware/Analytical Chemistry 2.0/Text_Files.html. 2016. p. 1133
  20. Booksh KS, Kowalski BR. Theory of analytical chemistry. Analytical Chemistry. 1994;66:782A–791A
  21. Cuadros‐Rodríguez L, Bagur‐González MG, Sánchez‐Viñas M, González‐Casado A, Gómez‐Sáez AM. Principles of analytical calibration/quantification for the separation sciences. Journal of Chromatography A. 2007;1158:33–46
  22. González AG, Herrador MA. A practical guide to analytical method validation, including measurement uncertainty and accuracy profiles. Trends in Analytical Chemistry. 2007;26:227–238
  23. Analytical Methods Committee. Is my calibration linear? Analyst. 1994;119:2363–2366
  24. González AG, Herrador MA, Asuero AG, Sayago A. The correlation coefficient attacks again. Accreditation and Quality Assurance. 2006;11:256–258
  25. Belloto RJ, Sokoloski TD. Residual analysis in regression. American Journal of Pharmaceutical Education. 1985;49:295–303
  26. Cuadros L, García AM, Bosque JM. Analytical Letters. 1996;29:1231–1239
  27. Meloun M, Militky J, Forina M. Chemometrics for Analytical Chemistry. Vol. 2. Chichester, West Sussex, UK: Ellis Horwood; 1994. pp. 64–69
  28. Thompson M, Ellison SLR, Wood R. Harmonized guidelines for single‐laboratory validation of methods of analysis. Pure and Applied Chemistry. 2002;74:835–855
  29. Herrador MA, Asuero AG, González AG. Estimation of the uncertainty of indirect measurements from the propagation of distributions by using the Monte‐Carlo method: An overview. Chemometrics and Intelligent Laboratory Systems. 2005;79:115–122
  30. Kelly WR, Pratt KW, Guthrie WF, Martin KR. Origin and early history of Die Methode des Eichzusatzes or the method of standard additions, with primary emphasis on its origin, early design, dissemination, and usage of terms. Analytical and Bioanalytical Chemistry. 2011;400:1805–1812
  31. Cuadros L, García AM, Alés F, Jiménez C, Román M. Validation of an analytical instrumental method by standard addition methodology. Journal of AOAC International. 1995;78:471–476
  32. Andrade‐Garda JM, Carlosena‐Zubieta A, Soto‐Ferreiro RM, Terán‐Baamonde J, Thompson M. Classical linear regression by the least squares method. In: Andrade‐Garda JM, editor. Basic Chemometric Techniques in Atomic Spectroscopy. 2nd ed. Cambridge, UK: The Royal Society of Chemistry; 2013. pp. 52–117
  33. Taverniers I, De Loose M, van Bockstaele E. Trends in quality in the analytical laboratory. II. Analytical method validation and quality assurance. Trends in Analytical Chemistry. 2004;23:535–552
  34. Harris DC. Quantitative Chemical Analysis. 8th ed. New York, NY: W.H. Freeman and Company; 2010. p. 874
  35. Ortiz MC, Sánchez S, Sarabia L. Quality of analytical measurements: Univariate regression. In: Brown SD, Tauler R, Walczak B, editors. Chemometrics: Chemical and Biochemical Data Analysis. Vol. 1. Amsterdam, The Netherlands: Elsevier; 2009. pp. 127–169
  36. Andrade JM, Terán‐Baamonde J, Soto‐Ferreiro RM, Carlosena A. Interpolation in the standard additions method. Analytica Chimica Acta. 2013;780:13–19
  37. Bernstein RE. Serum potassium by internal standard flame photometry. Nature. 1950;4199:649
  38. Adler I, Axelrod JM. Internal standards in fluorescence X‐ray spectroscopy. Spectrochimica Acta. 1955;7:91–99
  39. Porter JT II. New method for polarographic standardization. Analytical Chemistry. 1957;29:1638–1639
  40. Ray NH. Gas chromatography I. The separation and estimation of volatile organic compounds by gas‐liquid partition chromatography. Journal of Applied Chemistry (London). 1954;4:21–25
  41. Dimbat M, Porter PE, Stross FH. Apparatus requirements for quantitative application of gas‐liquid partition chromatography. Analytical Chemistry. 1956;28:290–297
