Open access peer-reviewed chapter

Calibration Methods of Laser-Induced Breakdown Spectroscopy

By Hongbo Fu, Junwei Jia, Huadong Wang, Zhibo Ni and Fengzhong Dong

Submitted: July 5th, 2017. Reviewed: December 4th, 2017. Published: December 20th, 2017

DOI: 10.5772/intechopen.72888



Laser-induced breakdown spectroscopy (LIBS) has gained great attention over the past two decades due to its many advantages, such as requiring no sample preparation, the capability of remote measurement, and fast, simultaneous multielement analysis. However, because of the inherent instability of laser-induced plasma, achieving highly sensitive and accurate quantitative analysis remains a major challenge for the worldwide LIBS community. Currently, many chemometric methods have been applied to LIBS calibration analysis, including univariate regression, multivariate regression, principal component regression (PCR), partial least squares regression (PLSR) and so on. In addition, appropriate sample and spectral pretreatment can effectively improve the analytical performance (i.e., limit of detection (LOD), accuracy and repeatability) of LIBS. In this chapter, we briefly summarize the progress of these calibration methods and their applications in LIBS and provide our recommendations.


  • laser-induced breakdown spectroscopy
  • sample and spectral pretreatment
  • calibration methods
  • chemometrics
  • calibration-free laser-induced breakdown spectroscopy

1. Introduction

Laser-induced breakdown spectroscopy (LIBS), sometimes also called laser-induced plasma spectroscopy (LIPS), has developed rapidly as an analytical technique over the past two decades. LIBS is a form of atomic emission spectroscopy that uses a high-energy pulsed laser as the excitation source. The laser is focused on the sample surface, thereby evaporating and atomizing the sample and generating a plasma. The light emitted by the plasma is detected by a spectrometer, and the sample's composition and elemental concentrations can be obtained by analyzing the plasma emission spectra. A typical LIBS setup mainly comprises a laser, a spectrometer, a detector and a computer, as shown in Figure 1.

Figure 1.

A schematic of a general apparatus for laser-induced breakdown spectroscopy illustrating the principal components.

The laser most widely used in LIBS is the Q-switched Nd:YAG solid-state laser. Typically, it is operated at the fundamental wavelength of 1064 nm, with a pulse energy of 30–100 mJ, a pulse width of 5–15 ns and a repetition rate of 1–10 Hz. In addition, researchers have tested the effects of lasers of different types and parameters on LIBS. Trautner et al. [1] investigated polyethylene (PE) and a rubber material from tire production by employing 157 nm F2 laser and 532 nm Nd:YAG laser ablation in nitrogen or argon gas backgrounds or in air. The effects of laser wavelength on the depth resolution of thin-film solar cells were investigated by Choi et al. [2] using ultraviolet (λ = 266 nm) and visible (λ = 532 nm) nanosecond Nd:YAG lasers. Labutin et al. [3] summarized nearly two decades of studies on femtosecond laser-induced breakdown spectroscopy (fs-LIBS). Picosecond pulse trains and nanosecond pulses were compared for laser ablation and LIBS measurements by Lednev et al. [4].

Spectrometers disperse the emitted radiation of the laser-induced plasma to obtain a spectrum of intensity as a function of wavelength. The dominant spectrometer types used for LIBS are the multichannel fiber spectrometer and the echelle spectrometer coupled with an intensified CCD. The echelle spectrometer offers a wide spectral range, a high spectral resolution and the possibility of time-resolved measurements. The plasma parameters (plasma temperature and electron density) change continuously with delay time, so an echelle spectrometer with time resolution is needed when calculating these parameters. However, time-resolved broadband spectrometers are expensive and strongly dependent on external conditions. The multichannel fiber spectrometer is robust and reliable for use in mobile and portable LIBS instruments and provides adequate spectral resolution, but its integration time is typically much longer than the plasma lifetime.

In recent years, with the rapid development of lasers, spectrometers and detectors and the urgent demand of in situ and online analysis, LIBS has developed rapidly. Compared with many other types of elemental analysis techniques, LIBS has obvious advantages:

  1. Simple equipment: few instruments, low cost and easy integration.

  2. Noncontact analysis: LIBS uses a pulsed laser as the excitation source, so the analysis is noncontact, which gives it broad application prospects in dangerous environments and in space exploration.

  3. No sample preparation: LIBS focuses the pulsed laser directly onto the sample, without any prior processing of the sample.

  4. Various samples: samples can be gas, aerosols, liquids and solids.

  5. Nondestructive analysis: The laser converges to the surface of the sample, and only a small amount of the sample is excited. It can be considered as nondestructive or near nondestructive.

  6. Three-dimensional analysis: LIBS can fire the laser at different positions on the sample surface, or repeat measurements at the same location, to analyze the composition and content both across the surface and at different depths.

  7. Total element analysis: The laser can excite all the elements in the sample, so all of them can be analyzed simultaneously.

  8. Remote analysis: The long-distance analysis of the LIBS can be achieved by remotely transmitting the laser energy and collecting the plasma emission spectrum through the fiber.

  9. Online analysis: LIBS is a very fast technology that provides analytical results in seconds, making it particularly suitable for rapid analysis or online industrial monitoring.

Because of these many advantages, LIBS has been applied to a number of analytical domains, for example, various alloys [5, 6, 7], slags [8, 9], soil [10], rocks [11, 12] and isotopes [13]. We searched the Web of Science for all scientific papers on laser-induced breakdown spectroscopy (LIBS) from 1963 to 2016. The statistical results are presented in Figure 2. It can be seen that LIBS has developed rapidly since 1990.

Figure 2.

The number of articles published in the Web of Science search by laser-induced breakdown spectroscopy (LIBS) in 1963–2016.


2. Pretreatment of samples and spectra

2.1. Sample pretreatment

One of the most widely cited advantages of LIBS is that it does not require sample preparation, but this may also be the biggest limitation on improving its consistency. In general, LIBS performance may be enhanced through two main approaches: pretreatment of samples and pretreatment of spectra. Many homogeneous solid samples require no sample preparation, for example, glass, alloy and plastic. For powder samples (e.g., cement [14], soil [10] and coal [15]), which can be pressed directly into pellets, the pressing process must be consistent with the preparation of the standard samples used for calibration. Compared with solid samples, the direct analysis of liquid samples by LIBS has many disadvantages: splashing, weak excitation and fluctuation of the liquid level. The simplest way to convert a liquid sample into a solid sample is to freeze it [16, 17]. Sobral et al. [18] investigated the detection sensitivity of Cu, Mg, Pb, Hg, Cd, Cr and Fe traces in water and ice samples under the same experimental conditions using LIBS. Another effective route to liquid analysis in a solid-matrix configuration consists of using an absorbent substrate, for example, plant-fiber spunlace nonwoven [19], absorbent paper [20], thin wood samples [21, 22] and membrane-based filter paper [23]. LIBS analysis of aerosols currently falls into two categories: direct analysis and enrichment. However, the detection limits and statistical quality of direct analysis are still relatively poor. On the other hand, substrate-based collection does not provide instantaneous information, but it does allow lower detection limits to be reached by increasing the sample flow rates and sampling times.

2.2. Correction and removal of continuum background

The detected plasma emission at a given wavelength in a spectrum is the sum of the analyte signal and the continuum background. The analyte signal is often overwhelmed by the continuum background, which interferes with the true signal intensity, compromises spectral clarity and hence reduces the accuracy of quantitative analysis. Zou et al. [24] developed a modified background-removal algorithm based on the wavelet transform for spectrum correction and applied it to low-alloy steel samples. This method can effectively improve the quality of the signals and the accuracy of the regression model. Sun et al. [25] presented a method that can automatically estimate and correct a varying continuum background emission. Simulations and experiments proved the efficiency of the method, which requires little human intervention and can automatically and flexibly estimate varying continuum backgrounds over a very wide spectral range. Another way to subtract the continuum background is to add a polarizer to the light-collection path. Penczak et al.'s [26] results show that the continuum background of Al plasma emission spectra induced by an 800 nm femtosecond pulsed laser is strongly polarized. A polarizer can therefore effectively filter out the continuum, improving the signal-to-noise and signal-to-background ratios of the characteristic lines.
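As a rough illustration of continuum removal (not the wavelet algorithm of [24]; this is a much simpler rolling-minimum baseline on a synthetic spectrum, with all values invented):

```python
import numpy as np

def estimate_baseline(spectrum, window=51):
    """Estimate a slowly varying continuum with a rolling minimum
    followed by a rolling mean (a crude stand-in for the estimators
    cited in the text)."""
    n = len(spectrum)
    half = window // 2
    padded = np.pad(spectrum, half, mode="edge")
    # Rolling minimum: the continuum lies below the emission lines.
    mins = np.array([padded[i:i + window].min() for i in range(n)])
    padded_mins = np.pad(mins, half, mode="edge")
    # Smooth the staircase left by the minimum filter.
    return np.array([padded_mins[i:i + window].mean() for i in range(n)])

# Synthetic spectrum: a decaying continuum plus two narrow emission lines.
x = np.linspace(0, 1, 500)
continuum = 100 * np.exp(-3 * x)
lines = 50 * np.exp(-0.5 * ((x - 0.3) / 0.005) ** 2) \
      + 80 * np.exp(-0.5 * ((x - 0.7) / 0.005) ** 2)
spectrum = continuum + lines
corrected = spectrum - estimate_baseline(spectrum)
```

After subtraction, the narrow lines survive while the smooth continuum is largely removed; real spectra would of course need the more careful estimators discussed above.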

2.3. Spectral normalization

In order to increase the stability of the signal, the analyte signal intensity can be normalized by a parameter representative of the actual plasma conditions. In general, there are three main normalization approaches [27]: (1) normalization using the intensity of an internal standard line; (2) normalization using a reference signal; and (3) compensation for the plasma conditions. Castro et al. [28] used 12 different types of data normalization to reduce matrix interference and improve the calibration models. Their findings show that the normalization modes were useful in compensating for differences among sample matrices; models without normalization presented two- to fivefold higher errors. Karki et al. [29] studied the analytical performance of six different spectrum normalization techniques, namely internal normalization, normalization with total light and normalization with background, along with their three-point-smoothed variants, for the quantification of Cr, Mn and Ni in stainless steel. The final results show the superiority of the internal normalization technique over normalization with total light and normalization with background, irrespective of whether Cr, Ni or Mn is analyzed. Wang et al. presented three spectrum standardization methods intended to improve the reproducibility of LIBS measurements, named the spectrum standardization approach [30], the sampled spectrum standardization approach [31] and the multivariate spectrum standardization method, respectively. Particular examples of spectral standardization use acoustic signals [32, 33] or laser-induced plasma images [34].
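The first two normalization modes can be sketched on simulated shots whose overall intensity fluctuates by a common random factor (the line intensities and line positions below are invented):

```python
import numpy as np

rng = np.random.default_rng(0)
base = np.array([1.0, 4.0, 2.0, 8.0, 0.5])   # "true" line intensities
# Each shot is scaled by a random factor, mimicking pulse-to-pulse
# fluctuations of the plasma.
shots = np.outer(rng.uniform(0.5, 1.5, 200), base)

analyte_idx, internal_idx = 3, 1             # hypothetical line positions

def rsd(v):
    """Relative standard deviation in percent."""
    return 100 * v.std() / v.mean()

raw = shots[:, analyte_idx]
by_internal = raw / shots[:, internal_idx]   # internal-standard line
by_total = raw / shots.sum(axis=1)           # "total light" normalization
```

Because the simulated fluctuation is purely multiplicative, both ratios remove it entirely; in real spectra the compensation is only partial, which is why the comparative studies cited above are needed.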

2.4. Automatic identification of emission lines

LIBS excites emission from all the elements in a sample, so reliable and fast identification of emission lines in laser ablation of multicomponent samples is crucial. Labutin et al. [35] applied an algorithm to automatically identify emission lines in LIBS. The algorithm consists of three parts: simulating a set of spectra corresponding to different temperatures and electron densities, searching for the best-correlated pair of a model spectrum and an experimental one, and attributing the peaks to specific lines. Ukwatta et al. [36] treated element detection as a multilabel classification problem, using support vector machines (SVMs) and artificial neural networks (ANNs) for multielement classification. The proposed algorithm was evaluated on experimentally obtained LIBS data, and the accuracy of the machine learning methods in identifying the elements correctly reached 99%. Mateo et al. [37] developed the software package SALIPS, which can quickly and semiautomatically identify spectral peaks and give the elemental composition of the analyzed sample. The software package simulates the spectrum using the relative intensities of the atomic lines in the NIST database and, to facilitate visual comparison, can present both the simulated and experimental spectra on the same plot.
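A drastically simplified version of such peak attribution, matching detected peak wavelengths against a reference line list within a tolerance, might look like this (the line table is a toy excerpt for illustration, not the NIST database):

```python
import numpy as np

# Toy reference table (wavelengths in nm); real work would query the
# NIST Atomic Spectra Database, as the SALIPS package does.
reference = {
    "Ca I": [422.67, 430.25],
    "Cr I": [425.43, 427.48],
    "Fe I": [438.35],
}

def identify(peaks_nm, reference, tol=0.05):
    """Attribute each detected peak to the closest reference line
    within +/- tol nm; unmatched peaks are labeled 'unknown'."""
    labels = []
    for peak in peaks_nm:
        best, best_d = "unknown", tol
        for species, lines in reference.items():
            for w in lines:
                d = abs(peak - w)
                if d <= best_d:
                    best, best_d = f"{species} {w}", d
        labels.append(best)
    return labels

labels = identify([422.68, 425.44, 500.00], reference)
print(labels)
```

A production identifier would also weigh relative line intensities and plasma conditions, as the cited algorithms do, rather than wavelength proximity alone.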


3. Calibration methods of LIBS

A number of calibration methods have been applied to LIBS quantitative analysis across various research fields and sample types. We cannot cover all published research articles; only a few of the most commonly used calibration methods are reviewed here.

3.1. Univariate analysis

The fitted area intensity I corresponding to the transition between a lower level E_l and an upper level E_u of an atomic species α can be expressed as:

I = F · N_α · (g_u · A_ul / U_α(T)) · exp(−E_u / (k_B · T))

where F is an experimental parameter, N_α is the atomic number density, A_ul is the transition probability, g_u is the upper-level degeneracy, U_α(T) is the partition function at the temperature T and k_B is the Boltzmann constant. For the same sample, if the temperature and density of each laser-induced plasma are constant, then I is proportional to the elemental concentration C. Given a series of samples with different C, one can establish a calibration line between spectral intensity and element concentration.


C_i = b_0 + b_1 · I_i + e_i,  i = 1, 2, …, n

where b_0 and b_1 are model parameters, e_i is the random error and n is the number of samples. The parameter Ĉ_i is the estimated value of C_i, namely

Ĉ_i = b_0 + b_1 · I_i

In the regression analysis, the best estimates of b_0 and b_1 are obtained from a set of measured (I, C) pairs, bringing Ĉ as close as possible to C. For example, Bhatt et al. [38] chose Ce II 413.38, 418.65 and 439.16 nm to establish univariate linear calibration curves, as shown in Figure 3.

Figure 3.

Simple linear regression calibration curves for Ce [38].
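A univariate calibration of this kind takes only a few lines of Python; the intensities and concentrations below are invented for illustration:

```python
import numpy as np

# Hypothetical calibration set: known concentrations C (wt.%) and the
# corresponding fitted line intensities I (arbitrary units).
C = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
I = np.array([1.1, 2.0, 3.9, 8.1, 16.2])

# Least-squares fit of I = b0 + b1 * C.
b1, b0 = np.polyfit(C, I, 1)

# Invert the calibration line to estimate an unknown concentration.
I_unknown = 6.0
C_hat = (I_unknown - b0) / b1
```

With these invented values the fitted slope is close to 2 and the unknown sample comes out near 3 wt.%.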

The correlation coefficient R, also called the Pearson coefficient, is often used to quantify the correlation between I and C, and is defined as:

R = Σ_i (I_i − Ī)(C_i − C̄) / √[ Σ_i (I_i − Ī)² · Σ_i (C_i − C̄)² ]

The correlation coefficient satisfies |R| ≤ 1, and the closer it is to 1, the better the correlation. Most LIBS papers report R², which provides quick information about the correlation of the data and hence a first indication of the predictive ability of the model, since poor correlation necessarily implies poor predictive ability. However, it should be noted that a model with R² close to 1 may still have poor predictive accuracy [39].

Precision is described by the standard deviation (SD), the relative standard deviation (RSD, in %) and the root-mean-square error (RMSE), which can be expressed as:

SD = √[ Σ_i (C_i − C̄)² / (n − 1) ],  RSD = 100 · SD / C̄,  RMSE = √[ Σ_i (Ĉ_i − C_i)² / n ]
In order to describe the lower limit of a quantitative model, the limit of detection (LOD) can be calculated from the following equation:

LOD = 3σ / S

where σ is the standard deviation of the background and 1/S (i.e., C/I) is the reciprocal of the slope S of the calibration curve. The calculated values of LOD for different elements are presented in Table 1.

Elements | LOD (%)

Table 1.

Limit of detection (LOD) estimated for different elements.
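These figures of merit are straightforward to compute; the sketch below uses an invented calibration set and an assumed background standard deviation:

```python
import numpy as np

C = np.array([0.5, 1.0, 2.0, 4.0, 8.0])     # reference concentrations
I = np.array([1.1, 2.0, 3.9, 8.1, 16.2])    # measured intensities

slope, intercept = np.polyfit(C, I, 1)
C_hat = (I - intercept) / slope             # predicted concentrations

rmse = np.sqrt(np.mean((C_hat - C) ** 2))   # root-mean-square error
r = np.corrcoef(I, C)[0, 1]                 # Pearson correlation R

sigma_bg = 0.05                             # assumed background SD
lod = 3 * sigma_bg / slope                  # LOD = 3*sigma / slope
```

For this invented data set R is very close to 1, the RMSE is a few hundredths of a wt.% and the LOD lands below 0.1 wt.%.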

3.2. Multivariate analysis

3.2.1. Multiple linear regression

For LIBS, an element usually has more than one emission line. If there are m variables and n samples, then

Ĉ_i = b_0 + b_1 · I_i1 + b_2 · I_i2 + … + b_m · I_im,  e_i = C_i − Ĉ_i,  Q = Σ_i e_i²

where I_i1, …, I_im are the intensities of different spectral lines from the same element, e_i is the residual, Q is the sum of squared residuals and Ĉ_i is the estimated value of C_i. One obtains b_0, b_1, …, b_m when Q reaches its minimum value.
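This least-squares minimization of Q can be sketched with numpy on synthetic data (the line sensitivities and noise level are invented):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 30, 3                                  # 30 samples, 3 lines of one element

C = rng.uniform(0.1, 5.0, n)                  # known concentrations
sens = np.array([2.0, 0.8, 1.5])              # per-line sensitivities (illustrative)
I = C[:, None] * sens + rng.normal(0, 0.02, (n, m))

# Design matrix with an intercept column; minimize Q = sum(e_i^2).
X = np.hstack([np.ones((n, 1)), I])
b, *_ = np.linalg.lstsq(X, C, rcond=None)
C_hat = X @ b                                 # estimated concentrations
```

`np.linalg.lstsq` returns the coefficients b_0, …, b_m that minimize Q; with three nearly collinear lines, the individual coefficients are unstable even though the fitted concentrations are accurate, which motivates the PCR and PLSR approaches below.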

For example, Chen et al. [40] used the multiple linear regression method to quantitatively analyze chromium in potatoes. The concentration of Cr (C_i) can be related to the intensity of the Cr lines and/or those of other corresponding elements; they normalized the quantitative analysis of Cr by considering the influence of the Ca matrix. Four independent variables (I_Cr, I_ΣCr, I_Ca and I_ΣCa) were used to test the performance of different linear regression methods, where I_Cr is the intensity of Cr I 425.43 nm, I_ΣCr is the sum of three Cr lines (Cr I 425.43, 427.48 and 428.97 nm), I_Ca is the intensity of Ca I 431.86 nm and I_ΣCa is the sum of five Ca lines (Ca I 422.67, 428.30, 430.25, 430.77 and 431.86 nm). Different combinations of the four independent variables were selected for unary, binary, ternary and quaternary linear regression analyses. The quantitative results for Cr obtained by linear regression with different variables are given in Table 2.

Calibration method | Input variables | R² | Predicted value (μg/g) | Relative error (%)

Table 2.

Quantitative results of Cr by different linear regression methods.

3.2.2. Principal component regression

In LIBS quantitative analysis, the calculated concentration is affected by the lines of the target element and of other elements. To analyze a problem comprehensively and systematically, one must consider many spectral lines of many elements. Because each line reflects information on the element concentration to a varying degree, and the lines are correlated with each other, the information used in the calculation overlaps to some extent. In the statistical study of multivariate problems, too many variables increase both the amount of computation and the complexity of the problem, so it is desirable that the variables involved in quantitative analysis be few while the information they carry be large. Principal component analysis (PCA) meets this requirement and is an ideal tool for such problems. PCA transforms a set of possibly correlated variables into a set of linearly uncorrelated variables by an orthogonal transformation; the transformed variables are called principal components. Principal component regression (PCR) is a multiple-regression method built on PCA. In general, predicting concentrations by PCR involves three steps: first, PCA is performed on the original (spectral) data matrix, and an appropriate number of principal components is selected by inspecting the eigenvalues, eigenvectors, variance contribution rates and cumulative contribution rates. Second, the selected principal components are regressed by ordinary least squares. Finally, the strongest possible correlations between the orthogonal PC scores and the elemental composition are established. When selecting principal components, PCR takes only the independent variables into account and ignores the dependent variable. PCR can reduce the dimensionality of the variables and address multicollinearity, but it cannot separate noise when the independent variables (signals) are very noisy, and it loses some information from the original variables, so the resulting regression model is not always the best obtainable.
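The three steps can be sketched with plain numpy on synthetic rank-3 spectra (all dimensions and values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
n, p, k = 40, 200, 3                       # samples, spectral channels, PCs kept

# Synthetic spectra: three latent components plus channel noise.
loadings = rng.normal(0, 1, (3, p))
scores = rng.uniform(0, 1, (n, 3))
X = scores @ loadings + rng.normal(0, 0.05, (n, p))
c = scores @ np.array([1.0, 2.0, -0.5])    # concentration tied to the latent scores

# Step 1: PCA of the mean-centered spectra via SVD.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
T = Xc @ Vt[:k].T                          # scores on the first k principal components

# Step 2: ordinary least squares of concentration on the PC scores.
A = np.hstack([np.ones((n, 1)), T])
coef, *_ = np.linalg.lstsq(A, c, rcond=None)

# Step 3: the fitted model predicts concentration from PC scores.
c_hat = A @ coef
```

Here 200 correlated channels are compressed into 3 orthogonal scores before regression, which is exactly the dimensionality reduction the paragraph above describes.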

Death et al. [41] applied PCR to determine the elemental composition of a series of run-of-mine (ROM) iron ore samples. LIBS spectral data were recorded in three separate spectral regions (250 nm, 400 nm and 750 nm) to measure major, minor and trace components of the iron ore sample pellets. Background stripping, normalization and spectral cleaning were applied to minimize the RSD of the LIBS data. PCR analysis was used to produce calibration models for Fe, Al, Si, Mn, K and P, which were verified against independent LIBS measurements. The model R² for Fe, Al, Si and K is 0.99, 0.98, 0.99 and 0.84, respectively. As an example, the PCR calibration model for Fe is shown in Figure 4 [41].

Figure 4.

PCR calibration model determined for iron using the 250-nm LIBS data.

3.2.3. Partial least squares regression

The main purpose of PCR is to extract the relevant information hidden in the spectral lines and then use it to predict the concentration. This approach retains only a subset of latent variables, so part of the noise is eliminated and the quality of the predictive model improves. However, PCR still has defects: useful variables with low variance are easily missed when the principal components are selected, and inspecting every component individually is impractical. Partial least squares regression (PLSR) is a multivariate statistical method that models multiple dependent variables as functions of multiple independent variables. PLSR is especially effective when the variables are highly collinear, and it handles the case where the number of samples is smaller than the number of variables. Partial least squares (PLS) combines the advantages of three analytical methods: PCA, canonical correlation analysis and multiple linear regression. Both PLS and PCA try to extract the maximum information reflecting the data variation, but PCA considers only the independent-variable matrix, while PLS also uses the response matrix and therefore has predictive power. PLS avoids potential problems such as non-normal data distributions, factor indeterminacy and unidentifiable models. PLS has two variants: PLS-1, for a single dependent variable, and PLS-2, for several dependent variables. Although PLSR is more complex than PCR and more prone to overfitting, better results can be obtained by using PLS to analyze low-precision data or highly complex systems.
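A minimal PLS-1, written here as the classical NIPALS iteration on synthetic latent-structure spectra (all data invented; real work would typically use a library implementation such as scikit-learn's `PLSRegression`):

```python
import numpy as np

def pls1(X, y, n_comp):
    """Minimal NIPALS PLS-1: regress a single response y on spectra X."""
    Xc, yc = X - X.mean(axis=0), y - y.mean()
    W, P, Q = [], [], []
    for _ in range(n_comp):
        w = Xc.T @ yc
        w /= np.linalg.norm(w)              # weight vector
        t = Xc @ w                          # score vector
        pvec = Xc.T @ t / (t @ t)           # X loading
        q = yc @ t / (t @ t)                # y loading
        Xc = Xc - np.outer(t, pvec)         # deflate X
        yc = yc - q * t                     # deflate y
        W.append(w); P.append(pvec); Q.append(q)
    W, P, Q = np.array(W).T, np.array(P).T, np.array(Q)
    return W @ np.linalg.solve(P.T @ W, Q)  # regression vector (centered data)

rng = np.random.default_rng(3)
n, p = 50, 300
scores = rng.uniform(0, 1, (n, 3))
X = scores @ rng.normal(0, 1, (3, p)) + rng.normal(0, 0.05, (n, p))
y = scores @ np.array([2.0, -1.0, 0.5])     # single response (the PLS-1 case)

B = pls1(X, y, n_comp=3)
y_hat = (X - X.mean(axis=0)) @ B + y.mean()
```

Unlike the PCA step in PCR, each weight vector w here is computed from the covariance with y, so the latent variables are chosen for their predictive relevance rather than their variance alone.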

The input variables of PLS can be characteristic spectral lines [42], partial spectral regions [43] or the full spectrum [44]. Amador-Hernandez et al. [45] used PLS-1 to quantify gold and silver in Au-Ag-Cu alloys. The influence of the spectral region (266–340 nm, 266–269/326–340 nm and 269–313 nm), laser energy (3 mJ, 8 mJ), background correction and integration time on the quantitative analysis by PLS was studied (Table 3).

— | — | Region I (8 mJ) | Region I (3 mJ) | Region II (8 mJ) | Region II (3 mJ) | Region III (8 mJ) | Region III (3 mJ)
Integrated spectra | WBC | 3.47 | 3.27 | 3.27 | 2.74 (2) | 2.72 (2) | 2.63 (2)
Integrated spectra | BC | 1.97 | 2.27 | 2.56 | 2.27 | 2.58 (2) | 2.09 (2)
Time-resolved spectra | WBC | 3.11 | 2.27 | 3 | 2.02 | 2.73 (2) | 3.96 (2)
Time-resolved spectra | BC | 3.07 | 2.32 | 3.13 | 2.23 | 2.47 (2) | 3.73 (2)

Table 3.

Standard error of calibration (SEP) and prediction (SEV, in italics) estimated during the determination of silver for autoscaled data.

Values in brackets correspond to number of factors if different from three.

WBC = without background correction; BC = with background correction.

3.2.4. Artificial neural network

To overcome the poor precision of calibration-curve methods and their limitations with nonlinear problems, researchers have proposed statistical learning methods for quantitative analysis by LIBS. Artificial neural networks (ANNs) are computing systems inspired by the biological neural networks that constitute animal brains. Such systems learn to perform tasks (progressively improving their performance) by considering examples, generally without task-specific programming. The following outstanding advantages of ANNs have attracted great attention in recent years: (1) they can approximate any complex nonlinear relationship; (2) all quantitative or qualitative information is stored across the neurons of the network, giving strong robustness and fault tolerance; (3) ANNs adopt parallel distributed processing, so they can perform a large number of operations quickly; (4) ANNs can learn and adapt to unknown or uncertain systems; and (5) they can handle quantitative and qualitative information at the same time. An ANN is a computational model based on the structure and functions of biological neural networks and usually contains an input layer, a hidden layer (competitive layer) and an output layer, as shown in Figure 5.

Figure 5.

Schematic of the three-layer artificial neural network [46].

For example, El Haddad et al. [46] used an artificial neural network to analyze heavy metals in soil and predict element concentrations. They used the average relative error of calibration, REC (%), and the average relative error of prediction, REP (%), to evaluate the predictive quality of the ANN models. REC and REP were preferred to RMSE because they provide percentages instead of absolute values. Before using an artificial neural network, it is necessary to optimize parameters such as the number of neurons and the number of training iterations.


REC(%) = (100 / Nc) · Σ_{i=1…Nc} |Ĉ_i − C_i| / C_i,  REP(%) = (100 / Np) · Σ_{i=1…Np} |Ĉ_i − C_i| / C_i

where Nc is the number of samples in the calibration set and Np is the number of samples in the prediction set, respectively (Table 4).

Output | Input elements | REC (%) | REP (%)
Al | Al, Ca, Ba, Fe, Ti | 18.7 ± 0.8 | 19.3 ± 2.1
Ca | Ca, Ba, Fe, Ti | 9.4 ± 0.4 | 15.2 ± 0.8
Fe | Fe, Ba, Ca, Ti | 15.5 ± 0.6 | 16.8 ± 0.9

Table 4.

Average relative errors of calibration (REC) and prediction (REP).
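The REC/REP metrics themselves are simple to compute once predictions are available; the concentrations below are invented for illustration:

```python
import numpy as np

def rel_err_pct(c_true, c_pred):
    """Average relative error in percent, as used for REC and REP."""
    return 100 * np.mean(np.abs(c_pred - c_true) / c_true)

# Hypothetical calibration and prediction splits.
c_cal_true = np.array([1.0, 2.0, 4.0, 8.0])
c_cal_pred = np.array([1.1, 1.9, 4.2, 7.8])
c_val_true = np.array([1.5, 3.0, 6.0])
c_val_pred = np.array([1.4, 3.3, 5.6])

REC = rel_err_pct(c_cal_true, c_cal_pred)   # error on the calibration set
REP = rel_err_pct(c_val_true, c_val_pred)   # error on the prediction set
```

A REP markedly larger than REC, as in Table 4's Ca row, is the usual symptom of a model that fits the calibration set better than it generalizes.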

3.2.5. Support vector regression

The structure of a neural network depends on the designer's experience and prior knowledge, whereas the support vector machine (SVM) is grounded in statistical learning theory and has a strict theoretical and mathematical foundation. Neural network learning algorithms lack quantitative analysis and complete theoretical support, and they also need many samples for training. SVM is often used for pattern recognition, classification and regression on small-sample, nonlinear and high-dimensional data, and can achieve very good results. SVM is based on the principle of structural risk minimization, which ensures that the learning machine has good generalization ability. SVM used for regression prediction is called support vector regression (SVR). SVR also guarantees the global optimality of the solution and avoids the local-minimum problem that neural networks cannot escape. Therefore, when only a small number of samples is available, it is better to use SVR than a neural network. It is important to note that the SVR model is sensitive to the choice of the penalty parameter C and the RBF kernel parameter δ, which must be optimized.

For example, Gu et al. [47] used three segmental spectra of 393–397 nm, 422–423 nm and 425–427 nm as the input variables of an SVR model to predict the content of Cr in soil samples. They obtained good predictive results: R² = 0.999, an absolute relative error of 2.61% and a calibration-curve slope close to 1, as shown in Figure 6.

Figure 6.

The calibration curve of Cr by the SVR model with segmental spectra input.
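An RBF-kernel SVR of this kind can be sketched with scikit-learn on simulated data (the features, response and hyperparameter values are invented, not those of Gu et al.):

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(4)
n = 60
X = rng.normal(0, 1, (n, 3))              # three spectral features (simulated)
y = np.tanh(X[:, 0]) + 0.5 * X[:, 1]      # mildly nonlinear response

# RBF-kernel SVR; the penalty C and kernel width gamma are the sensitive
# parameters noted above and would normally be tuned by cross-validation.
model = SVR(kernel="rbf", C=10.0, gamma="scale", epsilon=0.01)
model.fit(X, y)
r2_train = model.score(X, y)              # coefficient of determination R^2
```

In practice the hyperparameter search (e.g., a grid over C and gamma with cross-validation) matters far more than the fit call itself, precisely because of the sensitivity noted in the paragraph above.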

3.3. Calibration-free laser-induced breakdown spectroscopy

LIBS offers strong potential for in situ, real-time analysis without complex sample preparation. This allows it to be applied quickly and extensively to qualitative analysis, but quantitative analysis remains very difficult. Even for a given experimental configuration, the laser-induced breakdown spectrum depends not only on the concentration of the analyte but also on the composition of the matrix and its aggregation state. Matrix effects thus play an important role in quantitative analysis by LIBS. In order to overcome matrix effects, Ciucci et al. [48] proposed the calibration-free laser-induced breakdown spectroscopy (CF-LIBS) approach, which takes the matrix into account as part of the analytical problem. In local thermodynamic equilibrium (LTE), excited levels are populated according to the Boltzmann distribution and ionization states according to the Saha-Boltzmann equation. Each spectral line is represented as a point in a Boltzmann plane, where the slope and intercept of a linear fit yield the plasma temperature and the concentration of the corresponding element, respectively (Figure 7).

Figure 7.

Boltzmann plot containing some data resulting from the analysis of an aluminum alloy. The three lines represent the results of a linear best fit of the Al(I), Mn(II) and Mg(II) data [48].

Ciucci et al. [48] first applied CF-LIBS to the quantitative analysis of the composition of metallic alloys and to the quantitative determination of the composition of the atmosphere. CF-LIBS has since been applied to many samples, such as aluminum alloys, steel and iron alloys, precious alloys for jewelry, copper alloys, archeological copper artifacts, glasses, pigments on Roman frescoes and on parchments, soils and rocks, meteorites, coral skeletons and human hair. However, the accuracy of CF-LIBS is still not high.
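The core of the CF-LIBS procedure, extracting the plasma temperature from the slope of the Boltzmann plot, can be sketched as follows (the line data are invented; a real analysis would use NIST spectroscopic constants and measured intensities):

```python
import numpy as np

k_B = 8.617333e-5                 # Boltzmann constant, eV/K
T_true = 10000.0                  # assumed plasma temperature, K

# Hypothetical line data for one species: upper-level energies E_u (eV),
# degeneracies g_u and transition probabilities A_ul (s^-1).
E_u = np.array([3.14, 4.02, 4.83, 5.61])
g_u = np.array([4, 6, 2, 8])
A_ul = np.array([5e7, 2e7, 9e6, 4e7])

# Simulated measured intensities obeying the Boltzmann distribution.
I = g_u * A_ul * np.exp(-E_u / (k_B * T_true))

# Boltzmann plot: ln(I / (g_u * A_ul)) vs E_u is a line of slope -1/(k_B*T);
# the intercept carries the species concentration via the partition function.
y = np.log(I / (g_u * A_ul))
slope, intercept = np.polyfit(E_u, y, 1)
T_est = -1.0 / (k_B * slope)
```

With noiseless synthetic intensities the fit recovers the assumed 10,000 K exactly; in real CF-LIBS, self-absorption and deviations from LTE distort these points, which is the main source of the accuracy problems discussed above.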


4. The comparison of calibration methods

LIBS is an analytical technique that can excite all the elements in a sample. Univariate analysis uses only part of the spectral information and suffers strongly from plasma instability. More importantly, strong matrix effects prevent the application of simple calibration curves. There is no doubt that multivariate analysis is superior to univariate analysis. This has been proven by numerous researchers, for instance, in the analysis of rocks [49], rare earth elements [38], glass [50], cerium oxide [51], alloy steel [52], liquid steel [53], soil [54, 55], soybean oil [56], PZT (lead zirconate titanate) ceramics [57], Pb in navel orange [58], Marcellus Shale [59], tailing cores [60], geologically diverse samples [49], steel melt [61], slurry [62], iron ore [63] and pellets of plant materials [64].

Many multivariate analysis methods, especially chemometric methods, have been applied to the quantitative analysis of LIBS. The most common chemometric technique applied to concentration measurement by LIBS is PLS. It has been applied to many fields of analysis, such as soil [55, 65, 66], steel [67, 68, 69], glass [50], rock [70], iron ore [63] and coal [71, 72]. Other methods include PCR [50, 73, 74], LASSO [75, 76], kNN [77], ANN [78, 79, 80, 81], SVM [82, 83] and so on. PLS has been implemented either to calculate the concentration of a single element (PLS-1) or to simultaneously calculate the concentrations of more than one element (PLS-2). In addition to PLS, other linear (MLR [84, 85], PCR [50, 73] and LASSO [75, 76, 77]) and nonlinear regression methods (ANN [86, 87]) have been applied to LIBS quantitative analysis. In order to discern the most effective models for interpreting chemical abundances from LIBS spectra of geological samples, Boucher et al. [77] studied nine linear and nonlinear regression methods; the advantages and disadvantages of the various methods are summarized in Table 5. Their final results show that nonlinear methods tend to overfit the data and predict less accurately, while the linear methods proved more generalizable, with better predictive performance; the performance of different models also differs among oxides. At present, multivariate analysis, especially PLS, is the best choice, but it should be pointed out that multivariate quantitative analyses carry a high risk of overfitting.

Method | Advantages | Disadvantages | Characteristics
PLS | Used when X has many collinear features and when p >> N. Provides a stable multivariate model that can account for all oxides (PLS-2). | Provides a complex model in which all coefficients are linear combinations of the original channels. Involves a complex optimization problem with no simple, closed-form representation. | Linear, uses all channels (not sparse)
LASSO | Provides an interpretable model; selects the subset of predictors with the strongest effects on the response variable. Can be used for feature selection when less data are available. | Arbitrarily chooses one covariate from a group of highly collinear covariates to use in the model and discards the rest. | Linear, sparse, eliminates noisy channels
Elastic net | Performs well in the p >> N case. Provides an interpretable model that is more stable than the lasso. Useful for feature selection. | Cannot be used for feature selection in situations when less data are available because it overwhelms the data with too many model variables. | Linear, sparse, eliminates noisy channels
PCR | Decorrelates the data and reduces its dimensionality, combating the "curse of dimensionality". | Higher-order polynomial kernels tend to overfit the training set and poorly predict the testing set in this application. | May be linear or nonlinear; both use all channels
SVR | Performs well with a linear kernel. Can be either linear or nonlinear depending on the kernel. | As above, polynomial kernels tend to overfit the training set and poorly predict the testing set in this application. | May be linear or nonlinear; either uses all channels
kNN | Requires no model training other than choosing the number of neighbors, reducing run time and making it scale well to large data sets. | Tends to overfit the training data and is only as effective as the distance metric used to compare samples. | Nonlinear, uses all channels

Table 5. Comparison of various regression methods.
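The sparsity that Table 5 attributes to LASSO, keeping a few strong spectral channels and zeroing the rest, can be sketched with a plain coordinate-descent solver in NumPy. This is a toy illustration on made-up synthetic data, not the solver used in the cited work:

```python
import numpy as np

def soft_threshold(rho, alpha):
    """Soft-thresholding operator used in the LASSO coordinate update."""
    return np.sign(rho) * np.maximum(np.abs(rho) - alpha, 0.0)

def lasso_cd(X, y, alpha, n_iter=300):
    """Minimize (1/2n)*||y - Xb||^2 + alpha*||b||_1 by cyclic coordinate descent.

    X is assumed column-centered; returns the (sparse) coefficient vector b.
    """
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n        # per-channel curvature
    r = y - X @ b                            # current residual
    for _ in range(n_iter):
        for j in range(p):
            r += X[:, j] * b[j]              # remove channel j's contribution
            rho = X[:, j] @ r / n
            b[j] = soft_threshold(rho, alpha) / col_sq[j]
            r -= X[:, j] * b[j]              # add the updated contribution back
    return b
```

When the response truly depends on only a couple of channels, the fitted vector ends up with nonzero weights on exactly those channels, which is what makes the model interpretable for feature selection.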

CF-LIBS can overcome the influence of the matrix effect, but poor analytical accuracy has long been its fatal shortcoming. This is mainly because the laser-induced plasma is a very complex object whose realistic description is not attainable with simple mathematical models [88]. A number of researchers have made modifications to the CF-LIBS algorithm, for example to account for self-absorption [89, 90]. In recent years, several research groups [91, 92, 93, 94] began using standard samples to improve the accuracy of the standardless analysis. Cavalcanti et al. [92] presented one-point-calibration CF-LIBS and used it to analyze a set of copper-based samples; the results show that the new method achieves accuracy similar to, or even higher than, that of the calibration-curve approach.
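The core of CF-LIBS is the Boltzmann plot: plotting ln(I/(g_k·A_ki)) against the upper-level energy E_k gives a straight line of slope −1/(k_B·T), and the intercepts (after division by the partition functions and normalization to unity, the closure relation) yield the species concentrations. A minimal sketch of the temperature step, with made-up line data rather than real spectroscopic constants:

```python
import numpy as np

K_B_EV = 8.617333262e-5  # Boltzmann constant in eV/K

def boltzmann_fit(E_k, g_k, A_ki, I):
    """Boltzmann-plot fit used in CF-LIBS.

    E_k  : upper-level energies (eV)
    g_k  : upper-level degeneracies
    A_ki : transition probabilities (1/s)
    I    : measured integrated line intensities (arbitrary units)
    Returns the plasma temperature T (K) and the intercept, whose
    exponential is proportional to the species concentration.
    """
    y = np.log(I / (g_k * A_ki))
    slope, intercept = np.polyfit(E_k, y, 1)  # linear fit: y = slope*E_k + intercept
    return -1.0 / (K_B_EV * slope), intercept
```

In a full CF-LIBS analysis one such intercept is obtained per species and the concentrations are normalized to sum to 1; the one-point-calibration variant of Cavalcanti et al. instead anchors these values with a single standard sample.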



Acknowledgments

This work was supported by the National Natural Science Foundation of China (Grant nos. 11075184 and 61505223) and the Knowledge Innovation Program of the Chinese Academy of Sciences (Grant no. Y03RC21124).

© 2017 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

How to cite and reference


Hongbo Fu, Junwei Jia, Huadong Wang, Zhibo Ni and Fengzhong Dong (December 20th 2017). Calibration Methods of Laser-Induced Breakdown Spectroscopy. In: Mark T. Stauffer (Ed.), Calibration and Validation of Analytical Methods - A Sampling of Current Approaches. IntechOpen. DOI: 10.5772/intechopen.72888.
