Procedures of Food Quality Control: Analysis Methods, Sampling and Sample Pretreatment



Introduction
The development of new methods, whether or not they involve sophisticated techniques, for the quality control of food products continues to grow in response to consumer demands concerning food safety and authenticity. The production of fakes is a worldwide problem that includes food products. Examples can be found among olive oils, honeys and alcoholic beverages such as table wines and spirits (Contreras et al., 2010; Gallardo et al., 2009; Marcos et al., 2002; Zhang et al., 2001; Zhu et al., 2010). Food authentication therefore concerns both authorities and food processors, who seek to avoid unfair competition from counterfeiters who exploit the economic advantage of producing and selling fake food products. Food companies have adopted different strategies to improve the positioning of their brands on the market. These strategies include not only quality control based on the identification and reduction of forbidden compounds, but also the monitoring of key compounds that enhance the value of a food. In this chapter a practical review of methods for the quality control of food compounds and for the confirmation of food authenticity is presented. Non-destructive testing methodologies that have found new applications in the field of quality assurance are discussed in terms of their potential use in the food industry. The use of spectroscopic techniques with chemometric analysis for the classification of food samples based on quality attributes is also discussed. The optimization of an analytical method is based on the selection of fundamental working conditions (sampling, pretreatment, selectivity, linearity, working range, sensitivity, limit of detection, precision, accuracy, etc.) according to which the suitability of the selected method can be evaluated. One of the most important steps in any analytical method for compound determination in biological matrices is the sample pretreatment. The purpose of a pretreatment is to release the analytes from the matrix as well as to remove components that adversely affect the analytical signal. Moreover, before any analytical treatment it is necessary to take into account some preliminary procedures that involve statistical analysis (Cochran, 1977; Hinkelmann & Kempthorne, 1994). When making surveys within different research fields, access to the entire population of items/subjects/individuals/samples is usually impossible or prohibitively expensive, and thus sampling techniques come into play. Industry and business make use of sampling techniques to increase the efficiency of their internal operations. For industry in particular, sampling techniques are important tools for quality control, ensuring that final items are ready for marketing. In food processing, decisions on whether to accept or reject lots are well grounded only if the data are selected from the population with an appropriate sampling technique; only then can one be confident that the decision about acceptance or rejection of a lot is correct. That is, sampling procedures focus on the selection of a subset of observations within a population, intended to yield some knowledge about the entire population of concern. Sample observations provide data values of some measurable properties of the subject. The cost of collecting data about a population is therefore reduced by gathering information from a subset instead of the entire population. In addition, with an appropriate sampling technique the data can be collected and summarized more quickly, since the data set is smaller, and the accuracy and quality of the data can be improved. This chapter therefore starts with the sampling procedures, followed by the most common analytical methods used in food quality control.

Sampling procedures
If it is practically impossible or too expensive to examine every single sample in the population of interest, how large should the subset of samples be to ensure that the results obtained from it generalize? A single answer to this question is not possible. It is clear, however, that the larger the sample size, the more closely the sample data will match those of the entire population. Sampling procedures fall into two classes: probability sampling and non-probability sampling. In probability sampling every distinct sample in a given population can be identified, each possible sample has some known probability of selection, and any sample can therefore be selected and identified through a random process. That is, each sample within the population has its own probability of being randomly selected. In contrast, non-probability sampling refers to the case where the selected samples are the part of the population that is readily accessible, or where the population consists of volunteers, or where the samples were haphazardly chosen. In any such case some samples have no chance of selection, and information about the relationship between the samples and the entire population is very limited. In non-probability sampling no known probability is assigned to the samples of the population, and the result will almost certainly contain sampling biases. When the sample is selected on a random basis, three main types of random sampling technique can be identified: simple random sampling, stratified random sampling and systematic sampling. In simple random sampling, any sample from the population has an equal chance of being drawn, that is, every sample has the same probability of being selected. If each selected sample is removed from the population for subsequent draws, the method is simple random sampling without replacement. If, instead, the selected sample is returned to the population in such a way that for any new draw the population remains identical to the original, the method is simple random sampling with replacement.
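The two variants of simple random sampling can be sketched with Python's standard library; the lot of 200 hypothetical units and the subset size of 10 are arbitrary choices for the example:

```python
import random

lot = [f"unit_{i:03d}" for i in range(200)]  # hypothetical production lot
random.seed(42)  # fixed seed so the draw is reproducible

# Without replacement: drawn units leave the lot, so no unit can repeat.
subset_without = random.sample(lot, k=10)

# With replacement: the lot stays fixed for every draw, so repeats are possible.
subset_with = [random.choice(lot) for _ in range(10)]
```

Every unit has the same probability of being drawn in both variants; the two schemes differ only in how an already-drawn unit is treated on subsequent draws.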
Stratified random sampling is based on the fact that the entire population may sometimes embrace a certain number of distinct groups of samples, which can be treated as subpopulations. These subpopulations are called strata, and together they make up the total population. In stratified random sampling each stratum is sampled independently and each individual in the subpopulation is randomly selected, i.e., a simple random sample is taken in each stratum. Each subpopulation can thus be considered independent, each stratum is a population in its own right, and inferences can be drawn from each of them. When the population can be ordered in some scheme and the samples are drawn at regular intervals through that ordered list, the sampling method is called systematic sampling. To start the sampling it is convenient to select the first sample randomly and then draw the subsequent samples at regular fixed intervals until the end of the list. That is, the first element should be selected at random rather than simply taking the first on the list. Notice that in this method each regular interval defines a stratum containing the fixed number of elements spanned by the interval. The difference between this method and stratified sampling is that the former is defined by regular strata, with each selected individual occurring at the same relative position within its stratum, whereas in the latter the strata need not be regular and the individuals are selected randomly within each stratum.
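The difference between the two schemes can likewise be sketched in Python; the three strata (production lines) and the 10% sampling fraction below are hypothetical:

```python
import random

random.seed(7)
# Hypothetical strata: three production lines of different sizes.
strata = {
    "line_A": [f"A{i}" for i in range(50)],
    "line_B": [f"B{i}" for i in range(30)],
    "line_C": [f"C{i}" for i in range(20)],
}

# Stratified random sampling: an independent simple random sample per stratum.
stratified = {name: random.sample(units, k=len(units) // 10)
              for name, units in strata.items()}

# Systematic sampling: random start within the first interval, then every
# k-th element of the ordered list.
ordered = strata["line_A"] + strata["line_B"] + strata["line_C"]
k = 10
start = random.randrange(k)
systematic = ordered[start::k]
```

Note that the stratified sample sizes follow the stratum sizes (5, 3 and 2 units here), while the systematic sample always contains one unit per interval of length k.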

Normal distribution
By doing a survey with the appropriate sampling method one ends up with a collection of data that represents the measurement of one or more properties of the chosen samples from the given population. To make predictions based on statistical inference it is convenient to make use of probability functions. The normal distribution is a continuous probability function and is considered one of the most important in the field of statistics. This probability distribution is used in many different fields, such as statistics, the natural sciences, the medical sciences, business and the social sciences. Some of its advantages come from the fact that the random variables cluster around a single mean value and that a large number of results can be derived in explicit form. The normal distribution is also called the Gaussian distribution and, because its plot is a curved flaring shape, it is recognized as a bell curve. The simplest case of a normal distribution is

f(x) = (1/√(2π)) exp(−x²/2),   (1)

where x is the random variable. This expression corresponds to the so-called standard normal distribution. Its shape is a "bell curve" in which the coefficient 1/√(2π) ensures that the total area under the curve is equal to one, while the coefficient ½ in the exponent makes the width of the bell curve also equal to one. If µ is the mean value and σ² the variance of a population, the above equation can be expressed in the more general form

f(x) = (1/(σ√(2π))) exp(−(x − µ)²/(2σ²)).   (2)

The parameter µ localizes the peak of the bell curve, while the square root of σ², the standard deviation σ, defines the width of the distribution. This expression is called the probability density function and is represented in Figs. 1 and 2. In Fig. 1 the mean value µ and the variance σ² take different values to show how the shape of f(x) changes; notice that by setting µ=0 and σ²=1 one recovers the standard normal distribution. µ and σ² are two important parameters in sampling, probability theory and statistics. Most of the time µ is referred to as the mean of a set of values and corresponds to the arithmetic mean,

µ = (1/N) Σᵢ xᵢ,

where N is the total number of values {xᵢ}. In statistics µ is the expected value of a random variable, and when sampling a statistical population the term used for the mean is the sample mean. The variance measures how far a set of samples is spread out from the mean, and it is defined as

σ² = (1/N) Σᵢ (xᵢ − µ)².

For a discrete distribution P(X) with N possible values xᵢ the variance is

σ² = Σᵢ P(xᵢ) (xᵢ − µ)²,

and for a continuous distribution,

σ² = ∫ (x − µ)² f(x) dx.

Fig. 2. The normal probability density function with standard deviations.
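The density of Eq. (2) and the sample statistics above translate directly into a few lines of Python; the six measured values are invented for illustration:

```python
import math

def normal_pdf(x, mu=0.0, sigma=1.0):
    """f(x) = (1 / (sigma * sqrt(2*pi))) * exp(-(x - mu)**2 / (2 * sigma**2))."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

# Arithmetic mean and variance of a set of measured values {x_i}.
data = [4.8, 5.1, 5.0, 4.9, 5.2, 5.0]
mu = sum(data) / len(data)
var = sum((x - mu) ** 2 for x in data) / len(data)

# With mu = 0 and sigma = 1 the standard normal density peaks at 1/sqrt(2*pi).
peak = normal_pdf(0.0)
```

For the values above the mean is 5.0 and the variance 0.1/6 ≈ 0.0167, i.e. a slender bell curve centered at 5.0.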

Accuracy, precision and errors
When measuring some property of individuals, objects or samples, the terms accuracy, precision and error show up, since they are intrinsic to any measurement. The accuracy of a measurement or experiment relates to how closely the experimental results agree with a true or accepted value, while precision relates to how reproducible the measurement or experiment is. To illustrate these two concepts, consider a set of data, the set of random variables {xᵢ}, coming from the measurement of some property of an object and distributed normally according to Eq. (2). The true or reference value might lie to the left or to the right of the mean. The mean represents the peak of the distributed data, and the standard deviation tells us how the data spread around the mean. Accuracy is then the separation between the mean and the true value and, since individual measurements might not agree well with each other, precision is expressed by quoting the standard deviation. Systematic and random errors can easily be understood by considering two sets of data, from two measurements, each described by a normal distribution: one reported with poor accuracy but high precision, the other with good accuracy but poor precision. In the first case the difference between the mean and the true value is large, while the bell curve of the distribution is very slender; see for example the case µ=0 and σ²=0.2 in Fig. 1. That is, the measurements are very similar to one another, but none of them comes close to the true value and all share a large error. This kind of error is the systematic error, and it occurs in exactly the same manner every time a new measurement is performed. In the second case the peak of the bell curve, i.e. the mean, is very close to the true value, but the bell curve is very broad; see for example the case µ=0 and σ²=4 in Fig. 1. In other words, the measurements as a whole are correct, but each measurement by itself is a poor estimate of the true value. This is the case of random errors, which change each time the measurement is repeated.
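These two error types can be mimicked numerically; the true value of 10.0, the bias of 0.8 and the two spreads below are arbitrary illustration values:

```python
import random
import statistics

random.seed(1)
TRUE_VALUE = 10.0  # hypothetical reference value of the measured property

# Systematic error: every measurement is shifted the same way (precise but inaccurate).
biased = [TRUE_VALUE + 0.8 + random.gauss(0, 0.05) for _ in range(100)]

# Random error: no shift on average, but a broad spread (accurate but imprecise).
noisy = [TRUE_VALUE + random.gauss(0, 0.8) for _ in range(100)]

offset_biased = statistics.mean(biased) - TRUE_VALUE  # large, repeatable offset
offset_noisy = statistics.mean(noisy) - TRUE_VALUE    # close to zero
spread_biased = statistics.stdev(biased)              # slender bell curve
spread_noisy = statistics.stdev(noisy)                # broad bell curve
```

The first set reproduces the µ shifted, σ² small case (systematic error); the second reproduces the µ correct, σ² large case (random error).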

Methods based on chromatographic techniques
Chromatographic techniques have been widely used because they offer good information about sample composition. Their principal advantages are separation efficiency, the identification of almost any type of molecule present in a food sample and, depending on which technique is employed, very low detection limits for a wide range of analytes. Liquid chromatography, in particular HPLC (high performance liquid chromatography), can detect compounds such as aldehydes, proteins, amino acids, phenolic compounds and carbohydrates (Ball, 1990; De Zan et al., 2009; Francisco & Resurreccion, 2009; Rubio Barroso et al., 2006; Schultheiss et al., 2000; Thoma et al., 2006). GC (gas chromatography), on the other hand, is better suited to the analysis of volatile or semi-volatile compounds; some substances must be extracted from the food sample, e.g., fatty acids from triglycerides, but others, such as alcohols, can be injected directly onto the column (Petrovic et al., 2010; Wang et al., 2004). The detectors most commonly used with chromatographic techniques include UV-Vis (ultraviolet-visible), fluorescence, electrochemical, MS (mass spectrometry), FID (flame ionization detector), ECD (electron capture detector), the electronic nose, etc. The methods developed with these techniques have established compositional patterns for several types of samples by monitoring specific compounds. The principal disadvantage of chromatographic techniques is their complex operation, which requires special training to ensure that the technique is performed correctly; false results are obtained when the laboratory equipment is not operated correctly. In addition, long analysis times are required.

Gas chromatography
Basically, a gas chromatograph consists of four systems: the gas supply, the sampling system, the column and the detector. The gas supply system provides, depending on the kind of detector chosen, the necessary type of gas or gas mixture. The sampling system usually contains an automatic injector situated inside a thermostatically controlled enclosure; it generally has its own oven, but sometimes shares the column oven for temperature control. The injector can have a complex transport system that can take samples, wash containers, prepare derivatives and, if necessary, carry out a very complex series of sample preparation procedures before injecting the sample onto the column. The column is the principal device, the one that actually achieves the separation, and has an oven to control the column temperature. Finally, for detection there is a wide range of detectors available, each having unique operating parameters and its own performance characteristics; the best selection depends on the nature of the sample, and this system also has its own oven (Holley et al., 1995). Table 1 shows an overview of GC detectors.
The substances analyzed by GC must be volatile; they are vaporized and moved through a long column by an inert carrier gas. The column is filled with a packing material covered with a non-volatile liquid. The molecules of each substance distribute themselves between the gas and the liquid: the more volatile a substance is, the longer it travels with the carrier gas and the quicker it emerges from the column (Otles et al., 2008). The sample pretreatment is usually complex: the compounds of interest generally must be extracted from the matrix, and if they are not volatile then derivatization techniques must be used (Herraiz, 2000).

Table 1. GC detectors most commonly applicable for the determination of food components

The extraction of an analyte exploits the partitioning of a material between two phases and is based on its solubility or insolubility in various solvents. Extraction is useful for sample preparation and purification, but it is also often a first step in qualitative analysis. There are several kinds of extraction process, such as liquid-solid, liquid-liquid, SPE (solid phase extraction) and SPME (solid-phase microextraction).
In liquid-solid extraction a solvent (hydrophilic or hydrophobic; acidic, neutral or basic) is added to a solid. Insoluble material can be separated by gravity or vacuum filtration, while soluble material is extracted into the solvent. A sequence of solvents of varying polarity or pH can be used to separate complex mixtures into groups. The filtered solution can then be injected into the GC system. If the two phases are immiscible liquids, the technique is called liquid-liquid extraction. Usually one phase is aqueous (hydrophilic) and the other is a hydrophobic organic solvent. A sequence of extractions with more than one solvent can be used to separate relatively complex mixtures with considerable efficiency. SPE uses the affinity of solutes dissolved or suspended in a liquid (the mobile phase) for a solid through which the sample is passed (the stationary phase) to separate a mixture into desired and undesired components. As a result, either the desired analytes or the undesired impurities in the sample are retained on the stationary phase. The portion that passes through the stationary phase is collected or discarded, depending on whether it contains the desired analytes or the undesired impurities. If the retained portion includes the desired analytes, they can be removed from the stationary phase for collection in an additional step in which the stationary phase is rinsed with an appropriate eluent.
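For liquid-liquid extraction, the benefit of sequential extractions follows from the partition equilibrium: if K_D is the distribution coefficient of the analyte between the organic and aqueous phases, the fraction remaining in the aqueous phase after n extractions with fresh solvent is (V_aq / (V_aq + K_D·V_org))^n. A short sketch with invented numbers (K_D = 4, a 50 mL aqueous sample, 60 mL of solvent in total):

```python
def fraction_remaining(k_d, v_aq, v_org, n):
    """Fraction of analyte left in the aqueous phase after n successive
    extractions, each with a fresh portion v_org of organic solvent."""
    return (v_aq / (v_aq + k_d * v_org)) ** n

# Same total solvent volume (60 mL), split differently.
one_large = 1 - fraction_remaining(4, 50, 60, 1)    # one 60 mL extraction
three_small = 1 - fraction_remaining(4, 50, 20, 3)  # three 20 mL extractions
```

With these numbers a single 60 mL extraction recovers about 83% of the analyte, while three successive 20 mL extractions recover about 94%: splitting the same amount of solvent into several portions is more efficient.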
SPME involves the use of a fiber coated with an extracting phase, which can be a liquid (polymer) or a solid (sorbent), that extracts different kinds of analytes (both volatile and non-volatile) from different kinds of media in the liquid or gas phase.
The amount of analyte extracted by the fiber is proportional to its concentration in the sample as long as equilibrium is reached. The fiber is then transferred to the injection port of the gas chromatograph, where desorption and analysis of the analytes are carried out (Pawliszyn, 1997).
The most typical applications of GC in food analysis pertain to the quantitative and/or qualitative analysis of food composition, natural products, food additives, flavor and aroma components, a variety of transformation products, and contaminants such as pesticides, fumigants, environmental pollutants, natural toxins, veterinary drugs and packaging residues (Holley, 1995).
Among the natural products analyzed by GC, the analysis of olive oil samples stands out. Recent studies address the determination of several kinds of analytes: undesirable compounds, e.g. pesticides and herbicides (Dugo, 2005; Aramendia, 2007); monoaromatic volatile compounds (benzene, toluene and ethylbenzene) (Gilbert Lopez et al., 2010); beneficial health compounds such as phenolic compounds (Garcia Villalba et al., 2011); and the interpretation of the compounds responsible for the oil aroma (Garcia Gonzalez & Aparicio, 2010). Other works analyze changes of patterns through chemometric comparison of fatty acid composition (Lee et al., 1998). Chemometric analysis of the triacylglycerol and fatty acid composition is commonly a means of determining varietal and geographical authenticity (Luykx & van Ruth, 2008). GC offers good information about the composition of distilled beverages, and its utility has been proved in the analysis of spirits such as tequila, vodka, cognac, rum and brandy. Through a variety of detectors coupled to the GC equipment, such as FID, MS and the electronic nose, it has been possible to establish patterns of composition and concentration not only for white but also for rested (barrel-aged) alcoholic beverages. For these beverages, aldehydes, phenols, terpenes, higher alcohols, volatile organic acids, esters, fatty acids, etc. are monitored (Campo et al., 2007; Cardeal & Marriot, 2009; Peña Alvarez et al., 2006; Plutowska & Wardencki, 2008; Wang et al., 2004).
GC has also been successfully applied to analyze and authenticate other food products. Coffee, for example, has been studied to detect dichloromethane, methylimidazole and adulterations of ground roasted coffee with roasted barley; other properties, such as the fingerprint of coffee flavor and the discrimination among varieties, have been addressed with the help of chemometric analysis (Casal et al., 2002; Hovell et al., 2010; Huang et al., 2007; Oliveira et al., 2009; Russo et al., 1989). The detection of pesticides, vitamins, sugars and organic acids in vegetables, fruits and fruit juices has been reported (Barden et al., 1997; Gonzalez et al., 2008; Pereira et al., 2006; Sanz & Martinez Castro, 2004; Wegener & Lopez Sanchez, 2010; Xiao et al., 2006; Yang et al., 2011). GC was used in conjunction with LDA (linear discriminant analysis) and PLS (partial least squares) to differentiate apple juices (Reid et al., 2004), and the adulteration of strawberry puree with apple puree was detected at levels of 25% using SPME-GC and PLS analysis (Reid, O'Donnell & Downey, 2004). Sulfur compounds, fatty acids, flavor compounds and organochlorine components have been determined in milk (Luna & Juarez, 2005; Vazquez et al., 2005; Vazquez Landaverde et al., 2006; Vetter et al., 2000). Table 2 shows typical conditions and applications of GC in food analysis.

HPLC
HPLC is a separation technique that involves the injection of a small volume of liquid sample into a tube packed with tiny particles (3 to 5 μm in diameter). Individual components of the sample are moved down the packed tube (column) by a liquid (mobile phase) forced through the column at high pressure by a pump. In classical LC (liquid chromatography), by contrast, the sample is transported along the column by a liquid moving under gravity. In principle LC and HPLC work in a similar way, but HPLC has advantages over LC, such as speed, efficiency, sensitivity and ease of operation, that make it vastly superior. The compounds in a sample are separated from one another by the column packing through various chemical and/or physical interactions between their molecules and the packing particles. The separated components are identified at the exit of the column by a flow-through device (detector) that measures them. The detector can be based on several techniques, such as spectrophotometric, fluorescence or electrochemical detection, and its output is called a chromatogram. The HPLC equipment consists of four principal systems: the pump, the injector, the column and the detector.
Pump. The pump system forces a liquid (mobile phase) through the column at a specific flow rate; normal flow rates in HPLC are in the range of 1 to 2 mL/min. Typical pumps can reach pressures in the range of 6000-9000 psi (400 to 600 bar). During the chromatographic experiment a pump can deliver a constant mobile phase composition (isocratic) or an increasing mobile phase composition (gradient).
Injector. The injector introduces the liquid sample into the flow stream of the mobile phase. Typical sample volumes are 5 to 20 microliters (μL), and the injector must be able to withstand the high pressure of the liquid system. When there are many samples to analyze, or when manual injection is not practical, the use of an autosampler (an automatic version of the injector) is recommended.
Column. The column is considered the "heart of the chromatograph". The column's stationary phase separates the sample components of interest using various physical and chemical parameters. The pump must push hard to move the mobile phase through the column, and this resistance causes a high pressure within the chromatograph. The correct selection of the column packing and of the mobile phase are the most important factors in obtaining the best performance from HPLC.
Detector. The detector determines the individual molecules that come out of (elute from) the column. It measures the amount of those molecules so that the chemist can quantitatively analyze the sample components, and it provides an output to a recorder, the chromatogram.
To separate most compounds there are four major separation modes: RPC (reversed-phase chromatography), normal phase or adsorption chromatography, ion exchange chromatography and SEC (size exclusion chromatography). In RPC the column packing is non-polar (e.g. C18, C8, C3, phenyl, etc.) and the mobile phase is water (buffer) plus a water-miscible organic solvent, for example methanol or acetonitrile. RPC is by far the most popular mode (over 90% of separations) and can be used for non-polar, polar, ionizable and ionic molecules, which makes it very versatile for samples containing a wide range of compounds; gradient elution is often used. One begins with a predominantly water-based mobile phase and then adds organic solvent as a function of time. The organic solvent increases the solvent strength and elutes the compounds that are most strongly retained on the RPC packing.
The normal phase technique, or adsorption chromatography, is useful for water-sensitive compounds, geometric (cis-trans) isomers, class separations and chiral compounds. The column packing is polar (e.g. silica gel, cyanopropyl-bonded, amino-bonded, etc.) and the mobile phase is non-polar (e.g. hexane, iso-octane, methylene chloride, ethyl acetate); normal phase separations are performed less than 10% of the time. In ion exchange chromatography the column packing contains ionic groups, e.g. sulfonic or tetraalkylammonium, and the mobile phase is an aqueous buffer, for example phosphate or formate. Ion exchange is used in about 20% of cases and is well suited to the separation of inorganic and organic anions and cations in aqueous solution, ionic dyes, amino acids and proteins, in fact any salt compounds in brine water. In size exclusion chromatography there is no interaction between the sample compounds and the column packing material. Instead, molecules diffuse into the pores of a porous medium and, depending on their size relative to the pore size, can be separated: molecules larger than the pore opening do not diffuse into the pores, while smaller ones do. Large molecules therefore elute first, while smaller molecules elute later. The SEC technique is used in 10-15% of cases, mainly for polymer characterization and for proteins.
Modern agriculture and food processing often involve the use of chemicals; carbohydrates, for example, act as food binders. Such chemicals improve productivity and thus increase competitiveness and profit margins. However, if the amounts in the final product exceed certain limits, some of these chemicals may prove harmful to humans. It is therefore very important to control the amounts of these chemicals, and one important technique for doing so is HPLC. The number of applications of this technique in food quality control is growing continuously, and Table 3 shows some applications. Fig. 3, on the other hand, shows the polarity and volatility characteristics of the most common compounds in food products. This figure gives a broad idea of which technique, GC or HPLC, is better suited to several kinds of compounds, and of which type of column or separation mode is the best choice according to the compound's polarity. HPLC applications exist for many kinds of compounds in food products, for example the determination of vitamins B6, K, B2, D3 and C in honey, animal products, a wide range of fortified food products like fruit juices, fruits directly, and milk (Breithaupt, 2001; Ciulu et al., 2011; Fontannaz et al., 2006; Kall, 2003; Romeu Nadal et al., 2006; Salo et al., 2000). Analysis of the whey protein β-lactoglobulin has enabled detection of the adulteration of ovine and caprine cheese with bovine milk at levels as low as 2% v/v, and of caprine milk with bovine milk (Chen et al., 2004). Several types of distilled beverages, such as tequila, rum, cognac, whisky and vodka, have been analyzed by HPLC for the determination of aldehydes (2-furaldehyde, 5-hydroxymethyl-2-furaldehyde and furfural), amines, carbohydrates and toxic compounds like arsenic. However, there is little information concerning methods for the identification of adulterated distilled beverages; see Sec. 4.3 below. Some authors have reported methods for the determination of aging markers in tequila, such as phenolic compounds (gallic acid, vanillin, syringaldehyde, sinapinaldehyde and so on), and through these compounds it is possible to recognize adulterated tequilas (Alcazar et al., 2006; Coelho et al., 2005; Munoz Munoz et al., 2008; Soga, 2002; Vidal Carou et al., 2003).

Table 3. HPLC conditions for most common compounds analyzed in food
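Quantitative work like this rests on the validation parameters listed in the introduction (linearity, sensitivity, limit of detection). A minimal sketch of a least-squares calibration of detector response against standard concentrations follows; the concentrations and peak areas are invented, and the detection limit is estimated with the common 3.3·σ/slope convention:

```python
# Hypothetical calibration standards: concentration (mg/L) vs. HPLC peak area.
conc = [0.5, 1.0, 2.0, 4.0, 8.0]
area = [12.1, 24.3, 48.0, 97.2, 193.5]

n = len(conc)
mean_x = sum(conc) / n
mean_y = sum(area) / n

# Least-squares slope (sensitivity) and intercept of the calibration line.
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(conc, area))
         / sum((x - mean_x) ** 2 for x in conc))
intercept = mean_y - slope * mean_x

# Residual standard deviation of the fit and the 3.3*sigma/slope detection limit.
residuals = [y - (slope * x + intercept) for x, y in zip(conc, area)]
s_res = (sum(r * r for r in residuals) / (n - 2)) ** 0.5
lod = 3.3 * s_res / slope

# Concentration of an unknown sample from its measured peak area.
unknown = (75.0 - intercept) / slope
```

The slope is the sensitivity of the method; an unknown sample's concentration is read off by inverting the calibration line, and concentrations below the computed LOD should not be reported as detections.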

Fig. 3. Comparison of the physical and chemical properties of the main analytes of food products
Wines from different areas of origin in the Canary Islands were correctly classified at rates as high as 100% using HPLC analysis of selected polyphenol compounds combined with PCA and LDA (Rodriguez Delgado et al., 2002). Spanish table wines were correctly differentiated at rates of 83-86% using PCA and LDA of the HPLC data obtained for selected biogenic amine compounds in the samples (Romero et al., 2002). HPLC analysis of the triglyceride and tocopherol composition of coffee samples has likewise been combined with PCA and LDA to differentiate coffee samples on the basis of variety (Gonzalez et al., 2001). Another important food product that has been analyzed by several techniques is olive oil. The analysis of several phenolic acids is carried out by HPLC, where overlapping peaks are resolved using chemometric tools such as MCR-ALS and PARAFAC2 (parallel factor analysis) (Marini et al., 2001). Other works focus on the pretreatment: Tasioula-Margari et al. proposed the simultaneous extraction and HPLC determination of phenols and tocopherols in virgin oil, achieving an average recovery of 80% (Tasioula Margari & Okoger, 2001). Adulteration of olive oil with soybean oil can be detected by triacylglycerol evaluation with HPLC-APCI-MS-MS (Fasciotti & Pereira Natto, 2010).
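Studies of this kind reduce each sample to a vector of measured concentrations and then look for directions that separate the groups. As a minimal sketch of the idea behind PCA, the first principal component of two measured variables can be computed in closed form with the standard library; the six samples and the two groups below are invented:

```python
import math

# Hypothetical data: two compound concentrations per sample, two groups
# (e.g. two areas of origin), three samples each.
samples = [(1.0, 2.1), (1.2, 2.4), (0.9, 2.0),   # group 1
           (3.0, 5.9), (3.2, 6.4), (2.8, 5.8)]   # group 2

n = len(samples)
mean_x = sum(x for x, _ in samples) / n
mean_y = sum(y for _, y in samples) / n

# Covariance matrix of the two variables.
sxx = sum((x - mean_x) ** 2 for x, _ in samples) / (n - 1)
syy = sum((y - mean_y) ** 2 for _, y in samples) / (n - 1)
sxy = sum((x - mean_x) * (y - mean_y) for x, y in samples) / (n - 1)

# Orientation of the leading eigenvector of a 2x2 covariance matrix.
theta = 0.5 * math.atan2(2 * sxy, sxx - syy)
ux, uy = math.cos(theta), math.sin(theta)

# PC1 score of each sample: its projection onto that direction.
scores = [(x - mean_x) * ux + (y - mean_y) * uy for x, y in samples]
```

With real chromatographic data the vectors contain many more variables, the eigendecomposition is done numerically, and a classifier such as LDA is typically trained on the resulting scores; in this toy data set the two groups already separate cleanly along PC1.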

Electronic nose
Electronic nose technology is based on chemical gas-sensor array technology, i.e. the detection, by an array of semi-selective gas sensors, of the volatile compounds present in the headspace of a food sample. Some of the advantages of electronic nose technology over other techniques are the high number of samples that can be analyzed in a short period of time, with good reproducibility and accuracy. However, this technique employs sensors that are not very selective for particular types of compounds, thus preventing any real identification or quantification of the individual compounds present in a food sample. Electronic nose technology in conjunction with chemometric analysis (PCA and ANN) has also been successfully applied to the analysis of the changes, over days, in the aroma released from a flavour encapsulated in a polysaccharide gel matrix, i.e. the time dependence of the aroma pattern emitted by an encapsulated essence (Rodriguez et al., 2010). Chemometric analysis is also applied to alcoholic beverages such as wine samples. Spanish white, red and rosé wines were differentiated using a combination of an electronic nose and principal components analysis (Guadarrama et al., 2001). On the other hand, authenticity studies have also been carried out for the differentiation of honey samples. Other applications include the determination of the geographical origin of food products and the differentiation of olive oils from samples adulterated with other kinds of oils (Cerrato et al., 2002; Cosio et al., 2006). Despite its drawbacks, electronic nose technology remains an area of research that holds much potential for future development in a larger number of food-related applications. Numerous electronic nose studies related to food have already been published. Many of them represent preliminary feasibility studies, but a limited number represent basic studies including long-term validation of the technique on a specific application.

Spectroscopic techniques and chemometric tools
Spectroscopic techniques have the potential to simplify and reduce analytical times. This section reviews the use of some spectroscopic techniques which, combined with chemometric tools, assess food quality. Among the techniques considered are Raman, NIR, MIR, NMR, UV-Vis and fluorescence. Some of the main advantages of spectroscopic methods are simplicity, rapidity and the near absence of sample pre-treatment. However, the capacity for simultaneous determination of several analytes in the same sample, or the determination of a single analyte in complex matrices through direct spectroscopic measurements, diminishes because of spectral overlap. Chemometric tools such as MVCM (multivariate calibration models) allow spectral resolution and quantification in the presence of interferents. Further, the detection of discrepant prediction samples is possible and, recently, multiway chemometric techniques have been introduced for the analysis of complex samples (Bro, 2006; Culzoni et al., 2007; Ornelas, 2008; Rodriguez et al., 2008). The advantage of using data involving high-dimensional structured information is the higher stability towards interferents and matrix effects compared with first-order methodologies. There are several algorithms for analyzing second-order data, such as PARAFAC, BLLS (bilinear least-squares) and N-PLS (multiway partial least-squares) (Rodriguez et al., 2008).

PARAFAC is a decomposition method which can conceptually be compared to bilinear PCA. The decomposition is made into triads or trilinear components, and each component consists of one score vector and two loading vectors. An advantage of PARAFAC is the uniqueness of the solution. This means that the true underlying spectra will be found if the right number of components is used and the signal-to-noise ratio is appropriate (Bro, 1997). The decomposition of EEM (excitation-emission matrix) data, arranged into three-dimensional arrays, by PARAFAC exploits the second-order advantage (Culzoni et al., 2006). This advantage allows direct extraction of the spectral profiles as well as the relative concentrations of individual sample components (Munoz de la Pena et al., 2006). BLLS is a technique based on a direct least-squares procedure (Linder & Sundberg, 1998). It starts with a calibration step in which approximations to pure analyte matrices at unit concentration are found by direct least-squares. To estimate the pure analyte matrices, the calibration data matrices are first vectorised and grouped, and an analogous procedure to classical least-squares is then performed (Faber et al., 2002). Concentration estimation is usually carried out through the least-squares predictor (Linder & Sundberg, 1998). A promising alternative is N-PLS, which is a genuine multiway method. However, these algorithms do not present the second-order advantage unless they are coupled with a procedure known as RBL (residual bilinearisation) (Cardeal & Marriott, 2009; Casal et al., 2002; Russo & Goretti, 1989). Recently, two reviews about the algorithms and applications of second- and third-order multivariate calibration have been published (Bro, 2006; Escandar et al., 2007).
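The trilinear decomposition just described can be sketched with a minimal alternating least squares (ALS) loop in plain numpy. The synthetic three-way array below (samples × excitation × emission, with invented sizes, factors and noise) is built from known loadings, so the fit of the recovered model can be checked; real applications would use a dedicated implementation with non-negativity constraints and proper convergence criteria.

```python
# Minimal numpy-only PARAFAC-ALS sketch on a small synthetic EEM-like
# array. Sizes and factors are illustrative, not from the chapter.
import numpy as np

rng = np.random.default_rng(1)
I, J, K, R = 8, 20, 25, 2  # samples, excitation, emission, components

# Build a trilinear array from known factors plus small noise:
# X[i,j,k] = sum_r A[i,r] * B[j,r] * C[k,r] + noise
A = rng.random((I, R)); B = rng.random((J, R)); C = rng.random((K, R))
X = np.einsum('ir,jr,kr->ijk', A, B, C) + 0.01 * rng.normal(size=(I, J, K))

def unfold(T, mode):
    """Matricize tensor T along the given mode."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def khatri_rao(U, V):
    """Column-wise Kronecker product of two loading matrices."""
    return np.einsum('ir,jr->ijr', U, V).reshape(-1, U.shape[1])

# Alternating least squares: update each loading matrix in turn.
Ah = rng.random((I, R)); Bh = rng.random((J, R)); Ch = rng.random((K, R))
for _ in range(200):
    Ah = unfold(X, 0) @ np.linalg.pinv(khatri_rao(Bh, Ch).T)
    Bh = unfold(X, 1) @ np.linalg.pinv(khatri_rao(Ah, Ch).T)
    Ch = unfold(X, 2) @ np.linalg.pinv(khatri_rao(Ah, Bh).T)

Xh = np.einsum('ir,jr,kr->ijk', Ah, Bh, Ch)
fit = 1 - np.linalg.norm(X - Xh) / np.linalg.norm(X)
print(f"PARAFAC fit: {fit:.3f}")
```

Because the model is trilinear and the correct number of components is used, the recovered profiles match the true ones up to scaling and permutation, which is the uniqueness property mentioned above.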

4.1 The Raman, mid-infrared (MIR, 4000-400 cm⁻¹) and near-infrared (NIR, 14000-4000 cm⁻¹) spectroscopic techniques

The use of Raman, MIR and NIR spectroscopic techniques in chemical analysis has broadened over the past two decades due to the advance of technological and scientific innovations. The instrumentation involved in these three techniques is easy to implement in any laboratory, for academic or industrial use, due to the growing availability of improved instruments on the market. Furthermore, for routine processes these techniques are very practical, since special training is not required and the cost of the equipment is much lower than that of many other instruments used in chemical analysis. The main feature of these techniques when applied to any sample is that they are not destructive, and the time to obtain results is short compared to wet analysis. Thus, when they are applied to food products for quality control, the sample under study does not require any pre-treatment. When Raman spectroscopy is compared against IR spectroscopy, it presents the advantage of low sensitivity to water, and therefore lends itself to making measurements in liquid environments. On the other hand, it presents high sensitivity to C=C, C≡C and C≡N bonds, and thus the fatty acid composition in foods is easily determined. Raman spectroscopy also gives high selectivity to inorganic compounds like salts. Raman spectroscopy has been employed for the determination of food proteins. In this case, Raman spectra of modified proteins were analyzed through a new C=O stretching vibrational band attributed to the carboxyl groups of aspartic and glutamic acids (Wong et al., 2004). The Raman technique was also used for quantitative measurements of carotenoid, collagen and fat in fish muscle (Marquard & Wold, 2004). In the case of edible oil analysis, the Raman technique allowed the investigation of the chemical changes taking place during lipid oxidation, through the oxidative degradation of several vegetable oils by heating at 160 °C (Muik et al., 2005).
Raman spectroscopy with chemometric analysis was successfully employed for the determination of adulterants in honey, which is typically adulterated with cheap invert sugars. It was also employed to successfully differentiate honeys of various botanical and geographical origins of the nectar, which are related to the chemical composition of the honey. Because of consumer preferences, prices of honeys from single plant species are much higher than those of common polyfloral honeys (Lema et al., 2010; Paradkar & Irudayaraj, 2002). On the other hand, the analysis of food samples using MIR or NIR gives information about molecular bonds and can therefore give details of the types of molecules present in food products. With NIR spectroscopy one can easily quantify various properties of food, such as the online measurement of water, protein, pigment and fat content. Several results have been reported where these IR spectroscopic techniques are used jointly with multivariate analysis. For example, the quality control of potato chips was tested through extraction of the oil from the chips with a Carver press; the spectra were collected using a temperature-controlled ATR (attenuated total reflection) accessory (Shiroma & Rodriguez, 2009). These techniques were employed to differentiate between apple juice samples on the basis of apple variety; in this case chemometric procedures were applied to MIR and NIR data, with PLS2 and LDA applied to PC (principal component) scores for varietal differentiation (Reid et al., 2005). FTIR with chemometric analysis was used for the detection of adulterated oil samples and the classification of other vegetable oils according to their botanical origin (Gurdeniz & Ozen, 2009; Wang & Li, 2011). NIR and MIR have been widely employed for meat quality control. Several works have tested the ability of both spectroscopic techniques to follow the freshness decay of meat packaged in a high-oxygen modified atmosphere at different temperatures (Sinelli et al., 2010), and the determination of C22:5 and C22:6 in marine fatty acid analysis of pork adipose tissue (Flatten et al., 2005). FT-IR was exploited to measure biochemical changes within the fresh beef substrate, enhancing and accelerating the detection of microbial spoilage (Ellis et al., 2004). The potential of near-infrared spectroscopy to predict the texture and colour of dry-cured ham was investigated by analyzing 117 boned and cross-sectioned dry-cured ham samples. The overall accuracy of the classification as a function of pastiness was 88.5%, while according to colour it was 79.7%. Partial least squares regression was used to formulate prediction equations for pastiness and colour. The samples were classified into defective or non-defective classes with a correct classification of 94.2% according to pasty texture evaluation and 75.7% with regard to colour evaluation (Garcia Rey et al., 2005).

NMR spectroscopy
NMR spectroscopy involves the analysis of the energy absorption by atomic nuclei with non-zero spins in the presence of a magnetic field. The energy absorptions of the atomic nuclei are affected by the nuclei of surrounding molecules, which cause small local modifications to the external magnetic field. NMR spectroscopy can therefore provide detailed information about the molecular structure of a food sample, given that the observed interactions of an individual atomic nucleus are dependent on the atoms surrounding it. High-resolution NMR (HR-NMR; frequencies above 100 MHz) has been applied to many food products. NMR is conventionally used to obtain detailed chemical information on liquid-state systems. Recent advances in techniques and hardware now allow the routine acquisition of high-resolution 13C NMR spectra from solid materials. Although 1H is the most abundant NMR nucleus, there are technical difficulties in high-resolution solid-state 1H NMR that make 13C the nucleus of choice (Gidley, 1992). NMR studies are diverse, such as the study of the acyl positional distribution of fish phospholipids in the carbonyl region, where a selective enrichment of saturated and monounsaturated fatty acids in the sn-1 position, and a corresponding sn-2 location of polyunsaturated fatty acids (PUFA), was found (Medina & Sacchi, 1994). High-resolution 1H NMR has been applied to monitor the changes in the composition of natural mango juice subjected to spoilage and to microbial contamination with Penicillium expansum; this study was made by monitoring typical fermentation products, organic acids, amino acids and less abundant components such as oligosaccharides and aromatic compounds (Duarte et al., 2006). An important field for NMR analysis is the assessment of the geographical origin of traditional food; various countries have adopted different denominations to label their typical food products in order to give them additional value (Consinni & Cagliani, 2010). The relative deuterium concentration and specific deuterium-site locations in a molecule can be determined using SNIF-NMR (Site-Specific Natural Isotope Fractionation-Nuclear Magnetic Resonance). This can provide information about the chemical pathway of formation and, in some cases, information about the geographic origin of a sample can also be discerned (Cross et al., 1998). In this context, food products such as olive oil, dairy milk and cheese have been tested (Bescia et al., 2005; D'Imperio et al., 2007; Karoui & Baerdemaeker, 2007). The major disadvantage of HR-NMR is that it is one of the most expensive analytical techniques to employ.

Fluorescence and UV-Vis spectroscopy
Fluorescence spectroscopy has been used to assess different food properties through methods such as synchronous front-face fluorescence spectroscopy along with chemometric methods; a review on this matter has been reported by Sadecka and Tothova (2007). Some particular applications of such methods have been reported, for example the determination of the microbial load on chicken breast fillets stored aerobically (Shar et al., 2011). Studies of olive oil adulteration, and of the classification of virgin olive oil against other types, have also been made (Guimet et al., 2004; Guimet et al., 2005). Similar to other spectroscopic techniques, fluorescence spectroscopy has a large potential for use in food research, and this potential has been growing in recent years. Fluorescence spectroscopy can determine some properties of food products without sample preparation, since it is a non-destructive technique; there is no use of chemicals; the food can be solid or liquid; the time to obtain results can be very short compared with wet techniques; it can be used to analyze compounds at concentrations as low as parts per billion; and, above all, many food products contain intrinsic compounds that emit light when excited at the proper wavelength. Other food products that have been studied with different techniques are alcoholic beverages, e.g. beers, table wines and spirits. Some works have shown the effectiveness of applying chemometric methods to different spectroscopic techniques, for example to NIR spectra to estimate the ripeness of wine grapes (Herrera et al., 2003), and to FT-IR and UV spectra for the characterization and classification of wine (Guillén et al., 2005). The determination of the alcohol content in beverages has also been carried out through spectroscopic techniques (Barboza & Poppi, 2003). The differentiation between brandies and wine distillates has been achieved and reported through luminescence (Tothova et al., 2009). Thus, it can be claimed that multivariate data analysis has played an important role jointly with chromatographic and spectroscopic methods applied to food products, in particular alcoholic beverages (Barbosa Garcia et al., 2007; Edelmann et al., 2001; Mattarucci et al., 2010; Navas & Jimenez, 1999; Savchuk et al., 2001). On the other hand, with UV-Vis spectroscopy table wines have also been classified and differentiated (Urbano et al., 2006), and the authentication of whiskeys through absorption spectroscopy has been reported (MacKenzie & Aylott, 2004). UV-Vis spectroscopy with chemometric analysis is a simple way to distinguish the types of the popular spirit made from the juice of the agave plant, i.e. tequila (Barbosa Garcia et al., 2007). Two kinds of tequila are produced: the 100% blue agave and the mixed, which contains only 51% blue agave. Mezcal is another spirit made from a different type of agave, and it could also be distinguished from tequila with a similar methodology. The quality control of these spirits could be performed in situ along the production process. Another result from these studies is the differentiation among different brands of tequila, and all these results can be extended to other spirits. Therefore, the adulteration and authentication of tequilas can be recognized through a simple method. These results were obtained through UV-Vis absorption spectroscopy and validated through HPLC (Contreras et al., 2010; Munoz Munoz et al., 2010).
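A common pretreatment behind such UV-Vis and NIR classification studies is derivative filtering to suppress baseline differences between spectra. The sketch below, with invented spectra and wavelengths, shows how a Savitzky-Golay second derivative removes constant and sloping baselines while preserving band-shape information:

```python
# Illustrative sketch (invented spectra): baseline differences between
# two hypothetical UV-Vis spectra are suppressed with a Savitzky-Golay
# second derivative, a common chemometric pretreatment.
import numpy as np
from scipy.signal import savgol_filter

wl = np.linspace(250, 400, 300)          # nm, hypothetical window
peak = np.exp(-((wl - 320) ** 2) / 200)  # shared absorption band

# Two spectra with the same band but different constant/sloping baselines
s1 = peak + 0.30 + 0.001 * (wl - 250)
s2 = peak + 0.10 + 0.003 * (wl - 250)

# The second derivative removes constant and linear baseline terms
d1 = savgol_filter(s1, window_length=15, polyorder=3, deriv=2)
d2 = savgol_filter(s2, window_length=15, polyorder=3, deriv=2)

print("raw distance:       ", np.linalg.norm(s1 - s2))
print("derivative distance:", np.linalg.norm(d1 - d2))
```

After the derivative, the two spectra become nearly indistinguishable because they differ only in baseline, so any remaining distance between samples reflects genuine compositional differences rather than instrumental drift.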

Conclusions
In addition to sensory parameters such as aroma, colour or taste, the quality of food all over the world is based on measurable factors such as chemical and nutritional composition (percentage of sugar, fatty acids, antioxidants, protein, fibre, etc.), as well as microbiological control. In this chapter, an overview of some of the most used methodologies for the quality control of food products has been given. It is clear that chemical analysis plays an important role in this matter, and the choice of an appropriate analytical method should be treated as an essential aspect. This is so because the chosen method of analysis should be adequate to the purpose for which results are required, e.g., the determination of forbidden and permitted substances (application of food laws and official regulations) or the monitoring of key compounds that enhance the food value. Besides the suitable method of analysis, it was pointed out in this chapter that the sampling and pre-treatment of food products play an important role in the quality control process. A short review of sampling procedures was given, and the pre-treatment of food was pointed out alongside each presented analytical method. New trends in quality control processes are clearly seen nowadays. Industry needs to reduce costs and time during the testing of food products. Besides high levels of specificity and accuracy, analysis along the production line is needed. The pre-treatment of samples should be minimized to reduce the costs and times of testing food products. Thus, it was shown in this chapter that there is a great variety of analytical methods for food quality control. However, those based on chromatography are the best bet, since new methodologies with chemometric analysis are emerging. The main developments in chromatographic methods are focused on the reduction of analysis times; nowadays there are chromatographic runs lasting only a few minutes. These improvements increase the industrial potential of chromatography, although the use of MS detection would vastly increase the cost. Electronic nose technology has the advantage of being relatively cheap, quick and easy to operate. On the other hand, spectroscopic methodologies are the newest and are becoming stronger in the field. They have some interesting features not seen in other analytical methods, e.g. no pre-treatment of samples, the low cost of laboratory equipment for routine analysis, speed compared to other methods, and the possibility of being performed in situ along the production line. Moreover, other techniques such as NMR spectroscopy, despite their high level of specificity and accuracy in food characterization, have cost disadvantages for online applications, as do chromatographic techniques.

Fig. 1. The normal probability density function with different µ and σ² values. The variance tells us how far a set of samples is spread out; the square root of the variance tells us how far samples lie from µ, i.e., it describes the spread of the data around the mean, see Fig. 2. In that figure, the values 2|σ1|, 2|σ2|, 2|σ3| and 2|σ4| correspond to 68.2, 95.4, 99.6 and 99.8%, respectively, of the unit area under f(x) given by Eq. 2.
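As a quick numerical cross-check of such coverage figures, the canonical areas under a normal density within ±k standard deviations of the mean can be computed with scipy (these are the textbook ±kσ values; the 2|σi| spans quoted for the figure refer to the specific curves plotted there):

```python
# Area under the standard normal pdf within +/- k standard deviations:
# the textbook coverage fractions for a Gaussian distribution.
from scipy.stats import norm

coverages = {k: norm.cdf(k) - norm.cdf(-k) for k in (1, 2, 3, 4)}
for k, c in coverages.items():
    print(f"within +/-{k} sigma: {100 * c:.2f}% of the unit area")
```

This reproduces the familiar 68-95-99.7 rule used when interpreting the spread of sampled data around the mean.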