Open access

Imprecise Uncertainty Modelling of Air Pollutant PM10

Written By

Danni Guo, Renkuan Guo, Christien Thiart and Yanhong Cui

Submitted: 22 October 2010 Published: 17 August 2011

DOI: 10.5772/17760

From the Edited Volume

Advanced Air Pollution

Edited by Farhad Nejadkoorki


1. Introduction

Particulate matter (PM) refers to solid particles and liquid droplets found in air. Many man-made and natural sources produce PM directly, or produce pollutants that react in the atmosphere to form PM. The resulting solid and liquid particles come in a wide range of sizes, and particles that are 10 micrometers or less in diameter (PM10) can be inhaled into, and accumulate in, the respiratory system, and are believed to pose health risks (Environmental Protection Agency, 2010). Particulate matter is one of the six primary air pollutants the Environmental Protection Agency (EPA) regulates, because exposure to high outdoor PM10 concentrations causes increased disease and death (Environmental Protection Agency, 2010). Therefore PM10 concentrations, amongst many other air pollutants, are sampled and measured at various places in California, United States.

The general trend of PM air pollutant concentrations in California is downward, but PM continues to be monitored and observed. The California standard for annual PM10 concentration is an annual arithmetic mean of 20 µg/m3; the national standard, in force until 2006, was an annual arithmetic mean of 50 µg/m3 (California Environmental Protection Agency Air Resources Board, 2010; Environmental Protection Agency, 2010). The State of California sets very high standards for its air quality, and air pollutants are carefully monitored.

However, in reality it is too costly, in terms of time, finance, and manpower, to keep all 213 sites monitoring and recording. Fig. 1 shows a complete map of all 213 sample locations for PM10. One must note, however, that these sample sites are never all used in any given year; PM10 samples are taken at different locations each year. At best, 102 PM10 samples were collected in a year, and at worst only 61. Comparisons of PM10 between years are therefore difficult, due to missing data at sample sites. It is difficult to construct kriging maps from the actual annual observations, since the air pollutants were measured at different locations each year, although the site design as originally planned was statistically quite delicate.

Each year, approximately 40% of the 213 sites were actually observed. We call a site that does not have a recorded PM10 value in a given year a "missing value" site; the missing sites follow no pattern, which seriously distorts the kriging map constructions. Fig. 2 demonstrates this clearly. In 1989, 61 PM10 samples were collected (29% of the 213 locations), and in 2000, 94 PM10 samples were collected (44%).

Figure 1.

The complete set of 213 observational sites in the State of California

Figure 2.

PM10 Samples Collected in California in 1989 (61 sites) and 2000 (94 sites)

The data scarcity raises five fundamental issues in the spatial-temporal modelling and prediction practice for the California PM10 data, namely:

  1. The necessity to recognize the impreciseness in analyzing the spatial-temporal pattern of the California PM10 records, which inevitably affects the soundness of a geo-statistical analysis;

  2. Which theoretical foundations are appropriate for modelling imprecise uncertainty;

  3. How to fill in the "missing value" sites so that "complete" records are available, each record being either an original annual average from the actual observations recorded at the site (about 40%) or a value predicted from "neighbourhood sites" (about 60%), i.e., how to facilitate imprecise spatial-temporal PM10 values by interpolation and extrapolation;

  4. How to estimate the parameters of the uncertain processes (temporal patterns), particularly the rate-of-change parameter αi, i = 1, 2, …, 213;

  5. How to create the 19 annual kriging maps under spatial isotropy and stationarity assumptions, so that the changes between annual maps can be analyzed via the kriging map of the 2007−1989 difference and the kriging map of the location-wise rate of change.

These issues will be addressed in the remaining sections sequentially.


2. The necessity of modelling impreciseness in California PM10 spatial-temporal analysis

Impreciseness is a fundamental and intrinsic feature of the PM10 spatial-temporal modelling, due to the observational data shortage and incompleteness. Spatially, there are 213 sites involved; temporally, PM10 observations were collected from 1989 to 2007, a 19-year period. During this period, only two sites (Site 2125 and Site 2804) have complete 19-year records. There are 16 sites (8%) having only one record, and 70 sites having 10 or more records. A statistically significant time-series analysis requires a minimum of about 50 data points per site, so classical (probabilistic) time-series analysis cannot be performed. In order to have a quick overall evaluation of the PM10 records at each site, we borrow the statistical quality control idea (Western Electric, 1956; Montgomery, 2001). We do not apply the traditional 6-sigma rule; rather, we classify the PM10 records into four groups: 1-(5,20], 2-(20,35], 3-(35,50], 4-(50,160]. These group limits, shown in Table 1, reflect the California state standard (20 µg/m3) and the national standard (50 µg/m3) respectively. For example, group 1-(5,20] is for a location whose PM10 values fall between 5 and 20 µg/m3.
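The four-group classification above is a simple binning rule. As a minimal sketch (in Python; the function name is ours), it can be written as:

```python
# Hazard-level groups from the text: 1-(5,20], 2-(20,35], 3-(35,50], 4-(50,160],
# whose edges echo the California (20 ug/m3) and national (50 ug/m3) standards.
HAZARD_EDGES = [5, 20, 35, 50, 160]

def hazard_level(pm10):
    """Return the hazard group 1..4 for an annual-mean PM10 value in ug/m3."""
    for level, (lo, hi) in enumerate(zip(HAZARD_EDGES, HAZARD_EDGES[1:]), start=1):
        if lo < pm10 <= hi:          # half-open intervals (lo, hi]
            return level
    raise ValueError(f"PM10 value {pm10} outside the (5, 160] range")

print(hazard_level(18))   # -> 1 (meets the California standard)
print(hazard_level(47))   # -> 3 (between the state and national limits)
```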

County name | 1-(5,20] | 2-(20,35] | 3-(35,50] | 4-(50,160] | No. of Sites
Los Angeles |          | 3         | 7         |            | 10
San Diego   | 5        | 3         |           |            | 8

Table 1.

PM10 Hazard level evaluation over selected 7 counties

One must be aware that the classification is not absolute; rather, additional rules are applied, similar to quality control chart pattern analysis (Western Electric, 1956):

(1) if a site has only a single point, classify its hazard level according to the group that point falls in; (2) if a site has a sequence of records, some of them, particularly the early points, may fall in a higher (or lower) hazard level, but if the last three points fall in a lower (or higher) hazard level, that later level is chosen for the site.

Additional rule 1 can be attributed to confirmation by expert knowledge, while additional rule 2 can be regarded as an expert decision based on the trend pattern.
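The two additional rules can be sketched as follows. The fall-back to the most frequent level when the last three records disagree is our own assumption; the chapter leaves that case to expert judgement:

```python
def classify_site(levels):
    """Classify a site from its per-year hazard levels (1..4), in time order."""
    if len(levels) == 1:                 # rule 1: a single record decides directly
        return levels[0]
    recent = levels[-3:]                 # rule 2: the last three records dominate
    if len(set(recent)) == 1:            # they agree on one level -> take it
        return recent[0]
    return max(set(levels), key=levels.count)   # assumed fall-back: modal level

print(classify_site([4]))               # rule 1 -> 4
print(classify_site([3, 3, 2, 2, 2]))   # rule 2 -> 2
```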

Figure 3.

The 7 sites from the selected 7 counties with original PM10 data plots and the hazard level classifications

Fig. 3 shows the classifications of seven sites, one picked from each of the seven counties in Table 1 for illustration purposes. A red plot denotes hazard level 1-(5,20]; green, hazard level 2-(20,35]; purple, hazard level 3-(35,50]; and black, hazard level 4-(50,160].

It is evident that, facing the impreciseness caused by incomplete data recording, one has to rely on expert knowledge to compensate for the inadequacy and inaccuracy of the collected observational data. Impreciseness here refers to a term whose connotation is specified by an uncertain measure or an uncertainty distribution for each of the actual or hypothetical members of an uncertainty population (i.e., a collection of expert knowledge). An uncertain process is a repeating process whose outcomes follow no describable deterministic pattern, but follow an uncertainty distribution, such that the uncertain measure of the occurrence of each outcome can only be approximated or calculated.

Uncertainty modelling without a measure specification has no rigorous mathematical foundation, and consequently the modelling exercise is baseless and blind. In other words, measure specification is the prerequisite to spatial-temporal data collection and analysis. For example, without Kolmogorov's (1950) three axioms of probability measure, randomness is not defined, and statistical data analysis and inference has no foundation at all.

Definition 2.1: Impreciseness is an intrinsic property of a variable or an expert's knowledge being specified by an uncertain measure.

It is therefore inevitable to seek an appropriate form of uncertainty theory to meet the impreciseness challenge. From the theoretical basket, interval analysis (Moore, 1966), fuzzy theory (Zadeh, 1965, 1978), grey theory (Deng, 1984), rough set theory (Pawlak, 1982), upper and lower previsions (or expectations) (Walley, 1991), or Liu's uncertainty theory (2007, 2010) may be chosen.

Imprecise probability theory (Utkin and Gurov, 1998) may seem a natural answer to observational data inaccuracy and inadequacy. However, imprecise-probability-based spatial modelling requires too heavy assumptions. As Utkin and Gurov (2000) commented, “the probabilistic uncertainty model makes sense if the following three premises are satisfied: (i) an event is defined precisely; (ii) a large amount of statistical samples is available; (iii) probabilistic repetitiveness is embedded in the collected samples. This implies that the probabilistic assumption may be unreasonable in a wide scope of cases.” Guo et al. (2007) and Guo (2010) did attempt to address spatial uncertainty from the fuzzy logic and, later, Liu's (2007) credibility theory points of view.

Nevertheless, Liu’s (2007, 2010) uncertainty theory is the only one built on an axiomatic uncertain measure foundation and fully justified with mathematical rigor. It is therefore logical to engage Liu’s (2007, 2010, 2011) uncertainty theory to guide our understanding of the intrinsic character of imprecise uncertainty and to facilitate an accurate mathematical definition of impreciseness, in order to establish the foundations for uncertainty spatial modelling under imprecise uncertainty environments.


3. Uncertain measure and uncertain calculus foundations

Uncertainty theory was founded by Liu in 2007 and refined in 2010 and 2011. Nowadays uncertainty theory has become a branch of mathematics.

A key concept in uncertainty theory is the uncertain measure, a set function defined on a σ-algebra generated from a non-empty set. Formally, let Ξ be a non-empty set (space) and A(Ξ) the σ-algebra on Ξ. Each element A ∈ A(Ξ) is called an uncertain event. A number λ{A}, 0 ≤ λ{A} ≤ 1, is assigned to each event A ∈ A(Ξ), indicating the uncertain measuring grade with which the event A occurs. The set function λ{·} satisfies the following axioms given by Liu (2011):

Axiom 1: (Normality) λ{Ξ} = 1.

Axiom 2: (Self-Duality) λ{·} is self-dual, i.e., for any A ∈ A(Ξ), λ{A} + λ{A^c} = 1.

Axiom 3: (σ-Subadditivity) λ{⋃_{i=1}^∞ A_i} ≤ ∑_{i=1}^∞ λ{A_i} for any countable event sequence {A_i}.

Axiom 4: (Product Measure) Let (Ξ_k, A(Ξ_k), λ_k) be the kth uncertainty space, k = 1, 2, …, n. Then the product uncertain measure λ on the product measurable space (Ξ, A(Ξ)) is defined by

\[ \lambda\Big\{\prod_{k=1}^{n}\Lambda_k\Big\} = \min_{1\le k\le n}\lambda_k\{\Lambda_k\}. \]

That is, for each product uncertain event Λ ∈ A(Ξ) (i.e., Λ = Λ_1 × Λ_2 × ⋯ × Λ_n ∈ A(Ξ_1) × A(Ξ_2) × ⋯ × A(Ξ_n) = A(Ξ)), the uncertain measure of the event Λ is

\[
\lambda\{\Lambda\}=
\begin{cases}
\displaystyle\sup_{\Lambda_1\times\cdots\times\Lambda_n\subset\Lambda}\;\min_{1\le k\le n}\lambda_k\{\Lambda_k\}, & \text{if }\displaystyle\sup_{\Lambda_1\times\cdots\times\Lambda_n\subset\Lambda}\;\min_{1\le k\le n}\lambda_k\{\Lambda_k\}>0.5\\[6pt]
1-\displaystyle\sup_{\Lambda_1\times\cdots\times\Lambda_n\subset\Lambda^{c}}\;\min_{1\le k\le n}\lambda_k\{\Lambda_k\}, & \text{if }\displaystyle\sup_{\Lambda_1\times\cdots\times\Lambda_n\subset\Lambda^{c}}\;\min_{1\le k\le n}\lambda_k\{\Lambda_k\}>0.5\\[6pt]
0.5, & \text{otherwise}
\end{cases}
\tag{E4}
\]
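The three-case rule in (E4) is mechanical once the two sup-min values are known. A minimal sketch (the rectangular-event example below is ours):

```python
def product_uncertain_measure(sup_min_event, sup_min_complement):
    """Liu's product-measure rule (E4), given the two sup-min values.

    sup_min_event:      sup over rectangles inside the event of min_k lambda_k{.}
    sup_min_complement: the same quantity for the complement event
    """
    if sup_min_event > 0.5:
        return sup_min_event
    if sup_min_complement > 0.5:
        return 1.0 - sup_min_complement
    return 0.5

# Rectangular event L = L1 x L2 with lambda1{L1} = 0.9, lambda2{L2} = 0.7:
# the inner sup-min is min(0.9, 0.7); the complement contains the two strips
# L1^c x Xi2 and Xi1 x L2^c, giving sup-min max(1 - 0.9, 1 - 0.7).
print(product_uncertain_measure(min(0.9, 0.7), max(1 - 0.9, 1 - 0.7)))  # -> 0.7
```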

Definition 3.1: (Liu, 2007, 2010, 2011) A set function λ: A(Ξ) → [0, 1] satisfying Axioms 1-3 is called an uncertain measure. The triple (Ξ, A(Ξ), λ) is called an uncertainty space.

Definition 3.2: (Liu, 2007, 2010, 2011) An uncertain variable is a measurable function ξ from an uncertainty space (Ξ, A(Ξ), λ) to the set of real numbers, i.e., for any Borel set B of real numbers, the set {τ ∈ Ξ : ξ(τ) ∈ B} ∈ A(Ξ); that is, the pre-image of B is an event.

Remark 3.3: Parallel to the role of randomness in geostatistics, impreciseness occupies a fundamental position in geospatial-temporal uncertainty statistical analysis. In the California PM10 spatial-temporal study, nearly 60% of the sites do not have "complete" temporal sequences; in order to fill the "missing" observations and obtain "complete" sequences (i.e., 19 PM10 values at each individual site), we have to engage expert knowledge, which is inevitably imprecise and incomplete. Impreciseness here is a term for an intrinsic property governed by an uncertain measure or an uncertainty distribution for each of the actual or hypothetical members of an uncertainty population (i.e., a collection of expert knowledge). An uncertain process is a repeating process whose outcomes follow no describable deterministic pattern, but follow an uncertainty distribution, such that the uncertain measure of the occurrence of each outcome can only be approximated or calculated.

Remark 3.4: Impreciseness exists in engineering, business and research practice due to measurement imperfections, or due to more fundamental reasons such as insufficient available information, …, or due to a linguistic nature, because it is an unarguable fact that impreciseness exists intrinsically in expert knowledge of the real world.

Definition 3.5: Let ξ be an uncertain quantity of impreciseness on an uncertainty space (Ξ, A(Ξ), λ). The uncertainty distribution of ξ is

\[ \Psi_{\xi}(x) = \lambda\{\xi \le x\}, \quad x \in \mathbb{R}. \]
An imprecise variable ξ is an uncertain variable and thus a measurable mapping ξ: Ξ → D, D ⊆ ℝ. An observation of an imprecise variable is a real number (or, more broadly, a symbol, an interval, a real-valued vector, a statement, etc.) that is a representative of the population, or equivalently of an uncertainty distribution Ψ_ξ(·), under a given scheme comprising a set and a σ-algebra. The single value of a variable with impreciseness should not be understood as an isolated real number, but rather as a representative or realization from the uncertain population.

Definition 3.6: (Lipschitz condition) Let f(x) be a real-valued function, f: ℝⁿ → ℝ. Then f satisfies a Lipschitz condition if there exists a positive constant M > 0 such that, for any x, y ∈ ℝⁿ,

\[ |f(x) - f(y)| \le M\,\|x - y\|. \]
Definition 3.7: (Lipschitz continuity) Let f: ℝᵐ → ℝᵐ.

  1. f is Lipschitz continuous on a set B ⊂ ℝᵐ if there exists M > 0 such that

\[ \|f(x) - f(y)\| < M\,\|x - y\|, \quad \forall x, y \in B \tag{E7} \]

where ‖·‖ induces some metric d (for example, Euclidean distance in ℝᵐ), such that

\[ d(f(x), f(y)) < M\,d(x, y), \quad \forall x, y \in B; \tag{E8} \]

  2. f is locally Lipschitz continuous if, for each z ∈ ℝᵐ, f is Lipschitz continuous on some open ball B centered at z with radius r > 0;

  3. f is globally Lipschitz continuous if it is Lipschitz continuous on the whole of ℝᵐ.

Remark 3.8: In terms of continuity requirements, Lipschitz continuity is stronger than continuity in the Newton calculus sense but weaker than differentiability in the Newton sense. In other words, Lipschitz continuity does not warrant first-order differentiability everywhere, but neither does it mean nowhere-differentiability. Lipschitz continuity does not guarantee the existence of the first-order derivative everywhere; however, wherever the derivative exists, its value is bounded, since

\[ |f'(x)| = \lim_{h \to 0}\frac{|f(x+h) - f(x)|}{|h|} \le M, \]

by recalling the definition of the Newton derivative

\[ f'(x) = \lim_{h \to 0}\frac{f(x+h) - f(x)}{h}. \]
Similar to the concept of a stochastic process in probability theory, an uncertain process {ξ_t, t ≥ 0} is a family of uncertain variables indexed by t and taking values in the state space S.

Definition 3.9: (Liu 2010, 2011) Let {C_t, t ≥ 0} be an uncertain process such that:

  1. C_0 = 0;
  2. {C_t, t ≥ 0} has stationary and independent increments and Lipschitz-continuous sample paths;
  3. every increment C_{t+s} − C_s is a normal uncertain variable with expected value 0 and variance t², i.e., the uncertainty distribution of C_{t+s} − C_s is

\[ \Phi_t(x) = \left(1 + \exp\!\left(\frac{-\pi x}{\sqrt{3}\,t}\right)\right)^{-1}, \quad x \in \mathbb{R}. \]

Then {C_t, t ≥ 0} is called a canonical process.
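The normal uncertainty distribution of C_t, Φ_t(x) = (1 + exp(−πx/(√3·t)))⁻¹ in Liu's theory, can be evaluated directly; a small numerical sketch (the function name is ours):

```python
import math

def canonical_cdf(x, t):
    """Uncertainty distribution of C_t: normal with expected value 0, variance t^2."""
    return 1.0 / (1.0 + math.exp(-math.pi * x / (math.sqrt(3.0) * t)))

print(canonical_cdf(0.0, 2.0))                 # -> 0.5 (self-duality at the mean)
print(round(canonical_cdf(2.0, 2.0)
            + canonical_cdf(-2.0, 2.0), 10))   # -> 1.0 (Phi(x) + Phi(-x) = 1)
```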

Remark 3.10: Compare with the Brownian motion process {B_t, t ≥ 0} in probability theory, which is (almost surely) everywhere continuous and nowhere differentiable: Liu's canonical process {C_t, t ≥ 0} is Lipschitz-continuous and, wherever {C_t, t ≥ 0} is differentiable, the derivative is bounded. Therefore {C_t, t ≥ 0} is smoother than {B_t, t ≥ 0}.

Definition 3.11: (Liu, 2010, 2011) Suppose {C_t, t ≥ 0} is a canonical process, and f and g are some given functions; then

\[ d\xi_t = f(t, \xi_t)\,dt + g(t, \xi_t)\,dC_t \]

is called an uncertain differential equation. A solution to the uncertain differential equation is an uncertain process {ξ_t, t ≥ 0} satisfying it for any t > 0.

Remark 3.12: Since dC_t and dξ_t are only meaningful under the umbrella of the uncertain integral, an uncertain differential equation is an alternative representation of

\[ \xi_t = \xi_0 + \int_0^t f(s, \xi_s)\,ds + \int_0^t g(s, \xi_s)\,dC_s. \]
Definition 3.13: The geometric canonical process {G_t, t ≥ 0} satisfies the uncertain differential equation

\[ dG_t = \alpha G_t\,dt + \sigma G_t\,dC_t, \]

which has the solution

\[ G_t = \exp(\alpha t + \sigma C_t), \]

where α is called the drift coefficient and σ > 0 the diffusion coefficient of the geometric canonical process {G_t, t ≥ 0}, according to the roles they play respectively.


4. Spatial interpolation and extrapolation via inverse distance approach

Statistically, spatial interpolation and extrapolation modelling is a kind of linear regression modelling exercise, e.g., kriging methodology. Considering the shortage of California PM10 data records, we utilize a weighted linear combination approach first proposed by Shepard (1968). The weights are the inverse distances from the missing-value cell to the cells with actual observed PM10 values. The weight construction is a deterministic method: it is neutral and does not link to any specific measure theory. It is widely used for spatial prediction and map construction in geostatistics, but it is not probability oriented; rather, it is stimulated by molecular mechanics. A unique aspect of geostatistics is the use of regionalized variables, which fall between random variables and completely deterministic variables. The weight of an observed PM10 value is inversely proportional to its distance from the location being estimated.
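Shepard's weighting can be sketched in a few lines; the coordinates and values below are invented for illustration and are not actual CARB records:

```python
import math

def idw_predict(target, known, power=1.0):
    """Inverse-distance-weighted prediction of a missing value.

    target: (x, y) of the missing-value site
    known:  list of ((x, y), value) pairs with recorded PM10 values
    power:  distance exponent (Shepard, 1968, used 2; power=1 keeps weights ~ 1/d)
    """
    num = den = 0.0
    for (x, y), z in known:
        d = math.hypot(target[0] - x, target[1] - y)
        if d == 0.0:
            return z                   # exact hit: reuse the observation
        w = 1.0 / d ** power
        num += w * z
        den += w
    return num / den

sites = [((0.0, 0.0), 20.0), ((2.0, 0.0), 40.0)]
print(idw_predict((1.0, 0.0), sites))  # equidistant -> plain average, 30.0
```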


Figure 4.

The 7 sites from the selected 7 counties with completed 19 year observations of PM10

We wrote a VBA macro to facilitate the interpolations and extrapolations that "fill" the 2408 missing-value cells from the 1639 cells with recorded PM10 values. With the interpolations and extrapolations, every site now has 19 PM10 values. To assess whether the inverse distance approach can facilitate highly accurate predictions for each cell without an observed PM10 value, we performed a re-interpolation and re-extrapolation scheme (deleting one true PM10 record at a time and filling it from the remaining records) to evaluate the prediction error; the calculated mean of the sum of squared errors is 59.885, which is statistically significant (asymptotically).
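The delete-one evaluation scheme above can be sketched as a leave-one-out loop (helper names and the toy records are ours):

```python
import math

def idw(target, known):
    """Plain 1/d-weighted average of known ((x, y), value) pairs."""
    num = den = 0.0
    for (x, y), z in known:
        d = math.hypot(target[0] - x, target[1] - y)
        num += z / d
        den += 1.0 / d
    return num / den

def loo_mse(observations):
    """Delete each true record in turn, re-predict it from the rest,
    and return the mean of the squared prediction errors."""
    sq_errors = [
        (idw(loc, observations[:i] + observations[i + 1:]) - z) ** 2
        for i, (loc, z) in enumerate(observations)
    ]
    return sum(sq_errors) / len(sq_errors)

toy = [((0, 0), 20.0), ((1, 0), 25.0), ((0, 1), 30.0), ((1, 1), 35.0)]
print(round(loo_mse(toy), 3))
```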

We plotted sites 2045, 2744, 2199, 2263, 2297, 2914, and 2248 (which appeared in Fig. 3) in Fig. 4. Comparing Figs. 3 and 4, it is obvious that only at Site 2744 did the hazard level change (moving up to the next higher level), while the hazard levels of the other six sites are unchanged. This gives some justification for the inverse distance approach. Keep in mind that the aim of this article is to investigate whether the PM10 level changed over the 19-year period 1989 to 2007. The computed change need not be exactly accurate, only reasonably calculated, because of the impreciseness features of the completed PM10 records.


5. Uncertain analysis of site temporal pattern

Once the interpolations and extrapolations via the inverse distance approach are completed, a "complete" data set is available, containing 4047 data records of 213 sites over 19 years. The next task is, for a given site, how to model the uncertain temporal pattern. The "complete" data set obviously contains impreciseness uncertainty due to the interpolations and extrapolations. We are not sure that this impreciseness uncertainty is random uncertainty, so we again use uncertain measure theory to pursue the temporal uncertainty modelling.

Recall that Definition 3.13 in Section 3 facilitates an uncertain geometric canonical process {G_t, t ≥ 0}. Notice that G_0 = 1 may not fit the data reality, so we propose a modified uncertain geometric canonical process {G*_t, t ≥ 0} with arbitrary G_0 > 0:

\[ G^{*}_t = G_0 \exp(\alpha t + \sigma C_t). \]

Note that

\[ \ln G^{*}_t = \ln G_0 + \alpha t + \sigma C_t. \]

Let y_t = ln G*_t and α_0 = ln G_0; then we have

\[ y_t = \alpha_0 + \alpha t + \sigma C_t. \]
Recalling the relevant definitions in Section 3, we have

\[ E[C_t] = 0, \quad V[C_t] = t^2. \tag{E21} \]

But note that, for s < t,

\[ C_t = C_s + (C_t - C_s). \]

Notice that the increment C_t − C_s is independent of C_s, i.e., C_{t−s} is independent of C_s.



since if ξ_1 and ξ_2 are independent uncertain variables with uncertainty distributions Ψ_{ξ_1} and Υ_{ξ_2} respectively, then the joint uncertainty distribution of (ξ_1, ξ_2) is Φ_{ξ_1,ξ_2}(z_1, z_2) = min{Ψ_{ξ_1}(z_1), Υ_{ξ_2}(z_2)}. Hence we obtain the expression of σ_{s,t}:


Then the ith "variance-covariance" matrix Γ_i for the uncertain vector (y_{i1}, y_{i2}, …, y_{i19})′ is


where i is the site index and j, k = 1, 2, …, 19 index the entry pairs of the Γ_i matrix. Hence we have a regression model (Draper and Smith, 1981; Guo et al., 2010; Guo, 2010).

For the ith site, the regression model is

\[ y_{ij} = \alpha_{i0} + \alpha_i\, j + \sigma C_j, \quad j = 1, 2, \ldots, 19. \]
Then, in terms of the weighted least squares criterion, we can define an objective function

\[ Q(\bar\beta) = (\bar{Y}_i - X\bar\beta)'\,\Gamma_i^{-1}\,(\bar{Y}_i - X\bar\beta) \]

where

\[ \bar{Y}_i = \begin{bmatrix} y_{i1}\\ y_{i2}\\ \vdots\\ y_{i19} \end{bmatrix}, \quad \bar\beta = \begin{bmatrix} \alpha_{i0}\\ \alpha_i \end{bmatrix}, \quad X = \begin{bmatrix} 1 & 1\\ 1 & 2\\ \vdots & \vdots\\ 1 & 19 \end{bmatrix}. \tag{E28} \]

We further notice that


Then it is reasonable to estimate αi by


Furthermore, we notice that


Also, we can evaluate


in terms of numerical integration. Then an estimate of the Γ_i matrix is obtained:


Finally, we use the approximated objective function


to obtain a pair of estimates (α̃_{i0}, α̃_i). This estimation process is repeated until all 213 weighted least squares estimates (α̃_{i0}, α̃_i) are obtained.
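For illustration, the weighted least squares step can be sketched with a diagonal weight matrix; this is a simplification of the chapter's Γ_i, which also carries off-diagonal terms, and the weights below are invented:

```python
def weighted_line_fit(t, y, w):
    """Weighted least squares for y_j ~ a + b * t_j with positive weights w_j
    (diagonal simplification of the Gamma_i weighting in the text)."""
    Sw  = sum(w)
    St  = sum(wj * tj for wj, tj in zip(w, t))
    Sy  = sum(wj * yj for wj, yj in zip(w, y))
    Stt = sum(wj * tj * tj for wj, tj in zip(w, t))
    Sty = sum(wj * tj * yj for wj, tj, yj in zip(w, t, y))
    b = (Sw * Sty - St * Sy) / (Sw * Stt - St * St)
    a = (Sy - b * St) / Sw
    return a, b

t = list(range(1, 20))               # the 19 annual time indices
y = [3.0 + 0.5 * tj for tj in t]     # an exact line is recovered whatever the weights
w = [1.0 / tj for tj in t]           # illustrative down-weighting of later years
a, b = weighted_line_fit(t, y, w)
print(round(a, 6), round(b, 6))      # -> 3.0 0.5
```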

Recall the definition of the coefficient α_i: its sign and absolute value indicate the geometric change over the 19 years. Since the estimation procedure for α_i involves all the spatial-temporal information, it is reasonable to plot the estimates in a kriging map to reveal the overall changes over the 19-year period.


6. Kriging maps and time-change maps based on completed PM10 data

Kriging map presentation is vital for a geostatistician's visualization, as maps reveal hidden information, or the whole picture. A sample statistic typically condenses wide-spread information into a single numerical point, while a kriging map is actually a map statistic (or statistical map), which contains infinitely much information aggregated from limited "sample" information (i.e., observations). Kriging itself is not specifically probability oriented; it is another weighted linear combination prediction, but requires more mathematical assumptions. In fuzzy geostatistics, a fuzzy kriging scheme has also been developed (Bardossy et al., 1990).

Ordinary kriging (abbreviated as OK) is a linear predictor; see Cressie (1991) and Mase (2011). The formula is

\[ \hat{Z}(s_0) = \sum_{j=1}^{N} \lambda_j Z(s_j) \]
where the s_j are spatial locations with observations Z(s_j) available, and the coefficients λ_j satisfy the OK linear equation system

\[
\begin{cases}
\displaystyle\sum_{j=1}^{N}\lambda_j\,\gamma(s_j - s_i) + \psi = \gamma(s_0 - s_i), & i = 1, 2, \ldots, N\\[6pt]
\displaystyle\sum_{j=1}^{N}\lambda_j = 1
\end{cases}
\tag{E36}
\]

where ψ is the Lagrange multiplier.

The OK system is derived under the assumption of an additive spatial model

\[ Z(s) = \mu(s) + \varepsilon(s) \tag{E37} \]

where µ(s) is the basic (expected) spatial trend and ε(s) is a Gaussian error N(0, σ²(s)), i.e., a Gaussian variable with mean and variance

\[ E[\varepsilon(s)] = 0, \quad V[\varepsilon(s)] = \sigma^2(s) \tag{E38} \]

respectively. Accordingly, the variogram 2γ of the random error function ε(·) is defined by

\[ 2\gamma(h) = V[\varepsilon(s + h) - \varepsilon(s)], \]

where h is the separation vector between the two spatial points s + h and s, under the isotropy assumption.
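The OK system (E36) is a small linear system in the weights λ_j and the Lagrange multiplier ψ. A self-contained sketch (the two-site variogram values are invented):

```python
def solve(A, b):
    """Gauss-Jordan elimination with partial pivoting (enough for small OK systems)."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))   # pivot row
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c and M[r][c] != 0.0:
                f = M[r][c] / M[c][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def ordinary_kriging_weights(gamma, g0):
    """Solve E36: gamma is the N x N semivariogram matrix between observation
    sites, g0 the semivariogram vector from the prediction point to each site."""
    n = len(g0)
    A = [gamma[i][:] + [1.0] for i in range(n)] + [[1.0] * n + [0.0]]
    sol = solve(A, g0[:] + [1.0])
    return sol[:n], sol[n]               # weights lambda_j and multiplier psi

# Two sites symmetric about the prediction point, linear variogram gamma(h) = |h|:
lam, psi = ordinary_kriging_weights([[0.0, 2.0], [2.0, 0.0]], [1.0, 1.0])
print([round(x, 6) for x in lam])        # -> [0.5, 0.5]
```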

Figure 5.

Kriging Prediction Maps for PM10 in California 1989-2007.

The 213 observation sites now each have 19 years of PM10 values; a "complete" data set is available, containing 4047 data records of 213 sites over 19 years, and from it the 19 ordinary kriging prediction maps are generated for comparison. Fig. 5 shows all 19 years of PM10 concentration in the State of California. It is very interesting to examine the change in PM10 concentrations through the 19 years, based upon the modelled complete 213-site data. In particular, 1998 shows an extremely low PM10 concentration. Although air quality varied over the years, in general the PM10 concentration is decreasing, showing an improving air quality trend.

Figure 6.

Changes in PM10 values and the rate of change of PM10 in California between 1989 and 2007.

As one can clearly see from Fig. 6, PM10 concentration has decreased over the 19 years, and air quality has improved remarkably. The blue and green colours show negative changes, and red shows positive or near-positive changes. Counties such as San Diego, Inyo, Santa Barbara, and Imperial still show an increase in PM10 concentration in the air, indicating bad air quality, while Kern, Modoc, and Siskiyou counties show the most improvement in air quality. The left map in Fig. 6 is built from the PM10 record difference between 2007 and 1989 at each location, 213 values in total, from which a difference map is constructed. Obviously, the difference map only utilizes the 1989 and 2007 PM10 records; the seventeen years 1990, 1991, …, 2006 do not participate in the change-map construction. The right map in Fig. 6 shows the completed α̃_i, i = 1, 2, …, 213, the rate of change over the 19-year period 1989 to 2007.

Note that the calculations of α̃_i, i = 1, 2, …, 213, involve all nineteen years via the temporal regression, and the dependent variable y is estimated from the actual PM10 observations across all the available locations. Therefore, the rate-of-change parameter α_i at each individual location contains all the spatial-temporal information. It is reasonable to say that α̃_i is an aggregate statistic revealing the 19-year changes over the 213 locations; the α̃_i kriging map is thus different from the 2007−1989 difference kriging map. A positive α̃_i indicates an increasing trend in PM10 concentration, while a negative α̃_i indicates a decreasing trend; the absolute value of α̃_i reveals the magnitude of the change. It is worth reporting that, among the 213 locations, 193 have negative α̃_i, while only 20 (approximately 9%) have positive α̃_i.


7. Discussion

Air quality and health are always central issues of public concern about the quality of life. In this chapter, we examined PM10 levels over 19 years, from 1989 to 2007, in the State of California. Facing the difficulty of a lack of "complete" PM10 observational data, we utilised the inverse distance weighting methodology to "fill" in the locations with missing values. By doing so, impreciseness uncertainty is introduced, which is not necessarily explained on a probability measure foundation. We noted the character of a regionalized variable in geostatistics and therefore engaged Liu's (2010, 2011) uncertainty theory to address the impreciseness uncertainty. On this basis, we developed a series of uncertain-measure-founded spatial-temporal methodologies, including the inverse distance scheme, the kriging scheme, and the geometric-canonical-process-based weighted regression analysis, in order to extract the change information from the incomplete 1989-2007 PM10 records. The rate-of-change parameter α is a new idea: an aggregate change index utilizing all the available spatial-temporal data information, which improves considerably on classical change treatments. However, we are unable to demonstrate here the detailed uncertain-measure-based spatial analysis model; in future research, we plan to develop a more solid uncertain spatial prediction methodology.



Acknowledgements

We would like to thank the California Air Resources Board for providing the air quality data used in this paper. This study is supported financially by the National Research Foundation of South Africa (Ref. No. IFR2009090800013 and Ref. No. IFR2011040400096).


References

  1. Bardossy, A., Bogardi, I. & Kelly, E. (1990). Kriging with imprecise (fuzzy) variograms, I: Theory. Mathematical Geology, 22, 63-79.
  2. California Environmental Protection Agency Air Resources Board (2010). Ambient Air Quality Standards (AAQS) for Particulate Matter.
  3. Cressie, N. (1991). Statistics for Spatial Data. Wiley-Interscience, John Wiley & Sons Inc., New York.
  4. Deng, J. L. (1984). Grey dynamic modeling and its application in long-term prediction of food productions. Exploration of Nature, 3, 37-43.
  5. Draper, N. & Smith, H. (1981). Applied Regression Analysis, 2nd Edition. John Wiley & Sons, Inc., New York.
  6. Western Electric (1956). Statistical Quality Control Handbook. Western Electric Corporation, Indianapolis.
  7. Environmental Protection Agency (EPA) (2010). National Ambient Air Quality Standards (NAAQS). U.S. Environmental Protection Agency.
  8. Guo, D., Guo, R. & Thiart, C. (2007). Predicting Air Pollution Using Fuzzy Membership Grade Kriging. Computers, Environment and Urban Systems, 31(1), 33-51. ISSN 0198-9715.
  9. Guo, D., Guo, R. & Thiart, C. (2007). Credibility Measure Based Membership Grade Kriging. International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, 15(Supp. 2), April 2007, 53-66. ISSN 0218-4885.
  10. Guo, D. (2010). Contributions to Spatial Uncertainty Modelling in GIS. Lambert Academic Publishing. ISBN 978-3-84337-388-3.
  11. Guo, R., Cui, Y. H. & Guo, D. (2010). Uncertainty Statistics. Submitted to Journal of Uncertainty Systems, under review.
  12. Guo, R., Cui, Y. H. & Guo, D. (2010). Uncertainty Linear Regression Models. Submitted to Journal of Uncertainty Systems, under review.
  13. Guo, R., Guo, D. & Thiart, C. (2010). Liu's Uncertainty Normal Distribution. Proceedings of the First International Conference on Uncertainty Theory, August 11-19, 2010, Urumchi & Kashi, China, 191-207. Editors: Dan A. Ralescu, Jin Peng & Renkuan Guo. International Consortium for Uncertainty Theory. ISSN 2079-5238.
  14. Kolmogorov, A. N. (1950). Foundations of the Theory of Probability. Translated by Nathan Morrison. Chelsea, New York.
  15. Liu, B. D. (2007). Uncertainty Theory: An Introduction to Its Axiomatic Foundations, 2nd Edition. Springer-Verlag, Berlin-Heidelberg.
  16. Liu, B. D. (2010). Uncertainty Theory: A Branch of Mathematics for Modelling Human Uncertainty. Springer-Verlag, Berlin.
  17. Liu, B. D. (2011). Uncertainty Theory, 4th Edition, drafted version, 17 February 2011.
  18. Liu, S. F. & Lin, Y. (2006). Grey Information. Springer-Verlag, London.
  19. Mase, S. (2011). Geostatistics and Kriging Predictors. In: International Encyclopedia of Statistical Science, 1st Edition, Editor: Miodrag Lovric, 609-612. Springer.
  20. Montgomery, D. C. (2001). Introduction to Statistical Quality Control, 4th Edition. John Wiley & Sons, New York.
  21. Moore, R. E. (1966). Interval Analysis. Prentice-Hall, Englewood Cliffs, NJ. ISBN 0-13476-853-1.
  22. Pawlak, Z. (1982). Rough Sets. International Journal of Computer and Information Sciences, 11, 341-356.
  23. Shepard, D. (1968). A two-dimensional interpolation function for irregularly-spaced data. Proceedings of the 1968 ACM National Conference, 517-524.
  24. Utkin, L. V. & Gurov, S. V. (1998). New reliability models on the basis of the theory of imprecise probabilities. Proceedings of the 5th International Conference on Soft Computing and Information/Intelligent Systems, 2, 656-659.
  25. Utkin, L. V. & Gurov, S. V. (2000). New Reliability Models Based on Imprecise Probabilities. In: Advanced Signal Processing Technology, Soft Computing. Fuzzy Logic Systems Institute (FLSI) Soft Computing Series 1, 110-139. Charles Hsu (editor). World Scientific, November 2000. ISBN 978-9-81279-210-5.
  26. Walley, P. (1991). Statistical Reasoning with Imprecise Probabilities. Chapman and Hall, London. ISBN 0-41228-660-2.
  27. Zadeh, L. A. (1965). Fuzzy sets. Information and Control, 8, 338-353.
  28. Zadeh, L. A. (1978). Fuzzy sets as a basis for a theory of possibility. Fuzzy Sets and Systems, 1, 3-28.
