Open access peer-reviewed chapter

Image Simplification Using Kohonen Maps: Application to Satellite Data for Cloud Detection and Land Cover Mapping

Written By

Suzanne Angeli, Arnaud Quesney and Lydwine Gross

Submitted: March 22nd, 2012 Reviewed: July 5th, 2012 Published: November 21st, 2012

DOI: 10.5772/51352


1. Introduction

In this study, we present classification results obtained by applying Kohonen maps to Earth observation data. Earth observation refers to the use of sensors onboard artificial satellites dedicated to monitoring the geophysical properties of the Earth surface-atmosphere system. These sensors may perform passive or active remote sensing.

In the domain of Earth observation, the information delivered by optical telescopes is most of the time multi-spectral, meaning that several light wavelengths are measured, from near ultraviolet to thermal infrared, depending on the mission [1, 2, 3]. In Synthetic Aperture Radar (SAR) imagery, although most of the time only one variable is available (the unpolarised backscatter cross-section at the SAR emitted wavelength), recent satellite missions [4, 5, 6] deliver polarised responses, leading to two to four variables. In both cases (optical and SAR), Earth observation interpretation is thus a multivariate problem.

The physical theory used to interpret passive observations is called radiative transfer, while electromagnetism is used to interpret active sensor measurements. In both cases, the interpretation of satellite images corresponds to the inversion of the physical theory that explains the observation as a function of geophysical parameters describing surface and atmosphere properties. Because satellite imagery is geo-referenced, this retrieval leads to cartographic maps of Earth-atmosphere geophysical variables (atmospheric gas contents, aerosol and cloud optical thickness, vegetation chlorophyll content, surface soil moisture, sea surface salinity, etc.).

But physically interpreting satellite imagery requires a classification step prior to the retrieval of geophysical parameters. Indeed, for the same bio-geophysical variable, the physical inversion scheme may differ depending on the location on Earth of the processed pixel. As an example, the chlorophyll concentration in mg.m-3 is not estimated by the same algorithm if the observed pixel is located on a continental land surface or on a liquid water surface. Moreover, because physical inversion is often an ill-posed problem, an identification step is required to simplify this problem by introducing a priori knowledge (variable initialisation or default values). Indeed, land cover maps are often used as auxiliary inputs of physical inversion. A good example is the SMOS (Soil Moisture and Ocean Salinity) land processing chain [7], which requires soil properties and land cover classifications. These data give useful information about the observed surfaces and allow their modelling through a reduced number of free parameters that remain to be retrieved (soil moisture, optical thickness of the vegetation cover).

Physicists generally use very simple methods to classify pixels according to their geophysical content: they apply thresholds or threshold cascades, equivalent to decision trees based on linear tests, to spectral information or to linear combinations of spectral information. The choice of the variables tested at each node of the decision tree is based on physical knowledge, and the threshold values are found empirically by an operator on a few observation cases. Typical examples are the MODIS (Moderate Resolution Imaging Spectroradiometer) Level 2 cloud mask [8] and the Météo-France ECOCLIMAP land cover classification at 1 km resolution [9].

But these kinds of algorithms are very time-consuming to develop, and are often imperfect because remote sensing problems cannot be decomposed into linear subspaces. Indeed, for most applications, classification problems on SAR and optical imagery are non-linear and continuous, and should be addressed by Bayesian theory. Moreover, because the thresholds are very sensitive, each time a new sensor is launched, physicists have to redefine the decision trees or the threshold values. We thus decided to attempt a more automatic and mathematically better-adapted solution to the problem of geophysical identification of satellite observations.

In order to underline the progress achieved when using Kohonen maps instead of linear decision trees, we applied the Mixed Topological Map (MTM) algorithm [10] to two well-known classification problems in satellite imagery: cloud detection on low resolution (LR) optical imagery and land cover mapping on high resolution (HR) optical imagery. In the first use-case, we compared the results obtained by the MTM algorithm on MODIS data with the Level 2 MODIS cloud mask [8]. We showed that cloud detection is largely improved over bright surfaces like snow and deserts when using the MTM algorithm. In the second use-case, because no real-time labelled land cover is available at high spatial resolution, we validated our results by projecting HRG sensor images (Haute Résolution Géométrique [11]) classified by the MTM algorithm onto the Google Earth cartographic background, showing that temporally stable structures (downtown areas, rivers and forests) are correctly detected.

We propose to organise this chapter into four sections. The first section describes our methodology. The second details the application of the MTM algorithm to MODIS data for cloud detection and includes a state of the art of cloud detection. The third section describes the application of the MTM algorithm to HRG data for land cover mapping. Finally, a general conclusion detailing the satellite imagery classification improvements obtained using Kohonen maps is proposed, and an overview of possible continuations of the methodology is presented.


2. Methodology

The amount of satellite data available is very large (more than several tens of millions of pixels are acquired each day by low resolution sensors), and thus statistical methods are well adapted to developing automatic classification of remote sensing observations. On the other hand, the amount of in situ data (ground truth) that could be used as target data for supervised learning is very small, and thus not representative, compared to the amount of available input data (remote sensing pixels). Therefore, applicable statistical tools are unsupervised, and in situ data are to be used as test (or validation) data.

Because Kohonen maps are unsupervised algorithms, and moreover because they preserve the topology of the input space (close neurons on a Kohonen map represent close data according to the dissimilarity used during the learning phase), they are good candidates for remote sensing data classification. Indeed, topology conservation is essential to facilitate the a posteriori physical interpretation of the results, such as neuron identification. As an example, it allows propagating labels to the neighbourhood on the Kohonen map (automatically or not [12]). Another advantage of Kohonen maps is that they are adapted to multivariate non-Gaussian distributions (as found in remote sensing data ensembles), and they allow the simultaneous processing of quantitative and qualitative variables [10].

Our methodology can be described in three steps: the first step is the preparation of the data ensembles for learning, labelling and testing a Kohonen map; the second step is the Kohonen map learning and labelling phase; and the third step is the application of a calibrated Kohonen map to satellite images.

Indeed, the first step corresponds to posing the problem, i.e. encoding the variables to improve the performance of the targeted classification. This encoding may thus differ from one problem to another.

The realisation of the second step (the learning phase) may also differ from one problem to another, because choices have to be made concerning the learning algorithm and its parameters (for example: do we calibrate the map with or without a priori labels, do we choose a probabilistic map or not, which dissimilarity shall we use, etc.), concerning the Kohonen map architecture (typically the size of the map), and finally concerning the method used to label the map, depending on the quantity and the nature of the available a priori knowledge.

The third step of the method (the application of a calibrated Kohonen map to classify pixels) may be completed by an image processing algorithm to clean up the results if they are too noisy, which may be the case for high and very high resolution optical images or SAR imagery. In this case, the radiometric value of a pixel is replaced by the weights of its corresponding neuron prior to the image processing application, in order to simplify the problem.

In sections 3 and 4, these three steps are detailed for our two use-cases: cloud detection and land cover mapping using optical imagery.

2.1. Data preparation

In the case of optical imagery, if an exception is made of the future European Space Agency (ESA) Sentinel missions, the higher the spatial resolution, the fewer the available spectral bands, for evident reasons of data volume management. Typically, HR sensors are radiometers equipped with wide band-pass filters from the green (G) to the near infrared (NIR) and are referred to as broadband radiometers (BBR), while LR sensors are radiometers equipped with about ten narrow band-pass filters (sometimes more) from the blue (B) to the short-wave infrared (SWIR), and are referred to as multispectral imagers (MSI). Moreover, a very few MSI like PARASOL [13] and MISR [14] are multidirectional and polarised, leading to several tens of variables per pixel.

As our methodology is general and applicable to both BBR and MSI, we decided the following: each pixel is represented by all its available spectral information, avoiding the bands that are sensitive to gas absorption, which are too dependent on the atmosphere content. A data vector is thus an optical pixel defined by several spectral bands, in our case irregularly distributed from the blue to the SWIR. No geometrical information (solar and viewing zenith angles, solar and satellite azimuth angles) is taken into account in the definition of a data vector, but radiances (e.g. in W.m-2.sr-1.nm-1) are transformed into reflectances (dimensionless) prior to further computations. Moreover, in order to improve classification quality, we completed the available spectral information with new variables, created from the reflectance data by various kinds of transformations.
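The radiance-to-reflectance conversion mentioned above can be sketched as follows. This is a minimal illustration of the standard top-of-atmosphere formula; the band solar irradiance values and the handling of the Earth-Sun distance are sensor-dependent and are assumptions here, not part of the original text:

```python
import numpy as np

def radiance_to_reflectance(radiance, esun, sun_zenith_deg, d_au=1.0):
    """Convert top-of-atmosphere spectral radiance (W.m-2.sr-1.nm-1) to
    dimensionless reflectance using the standard TOA formula.

    radiance       : measured spectral radiance
    esun           : mean solar exo-atmospheric irradiance for the band (W.m-2.nm-1)
    sun_zenith_deg : solar zenith angle in degrees
    d_au           : Earth-Sun distance in astronomical units
    """
    mu_s = np.cos(np.deg2rad(sun_zenith_deg))
    return np.pi * np.asarray(radiance, dtype=float) * d_au**2 / (esun * mu_s)
```

In practice, the per-band `esun` constants come from the sensor calibration files distributed with the Level 1 products.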

Red (R), green (G) and blue (B) bands were converted into other colour spaces:

  1. The CMYK (Cyan, Magenta, Yellow and Key (black)) space is the colour model used in colour printing. The black component is used to increase the black density beyond what can be obtained by adding magenta, cyan and yellow.

  2. The HSV colour model can be seen as a cylindrical space of three components: Hue gives the colour in degrees, Saturation corresponds to the colour intensity in percent, and Value is the colour brightness in percent.

  3. The CIE L*a*b* (more often called Lab) space has been created by the CIE (Commission Internationale de l'Eclairage) so that distances between colours are closer to colour differences as perceived by the human eye. The L* variable indicates the lightness in percent. The colour is described by two axes: between green (negative) and red (positive) for the component a*, and between blue (negative) and yellow (positive) for the component b*.
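The colour space conversions above are standard; as an illustrative sketch, CMYK can be derived directly from RGB values in [0, 1], and HSV is available in the Python standard library (note that `colorsys` returns hue in [0, 1] rather than degrees):

```python
import colorsys

def rgb_to_cmyk(r, g, b):
    """Convert RGB in [0, 1] to CMYK; K is the black (Key) component."""
    k = 1.0 - max(r, g, b)
    if k >= 1.0:                       # pure black: C = M = Y = 0
        return 0.0, 0.0, 0.0, 1.0
    c = (1.0 - r - k) / (1.0 - k)
    m = (1.0 - g - k) / (1.0 - k)
    y = (1.0 - b - k) / (1.0 - k)
    return c, m, y, k

# HSV from the standard library (hue in [0, 1], not degrees)
h, s, v = colorsys.rgb_to_hsv(0.2, 0.6, 0.4)
```

The Lab conversion is more involved (it goes through the XYZ space and depends on a reference white point) and is typically delegated to an image processing library.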

Moreover, we applied a gamma correction to the RGB bands to increase their contrast [15]. This transformation consists in raising the corresponding channel to a power. We used 0.45 as the exponent, the value generally used for the sRGB space. This kind of correction is commonly used to adjust screens or cameras.

Some physical indices were created from the spectral bands, like normalized difference indices. They are commonly used by scientists to detect different kinds of Earth surfaces and their temporal variations:

  • The Normalized Difference Vegetation Index (NDVI) is the most commonly used vegetation index. It is created from the near-infrared (NIR) and red (R) bands and is used by physicists to monitor vegetation variations: NDVI = (NIR - R) / (NIR + R).

  • The Normalized Difference Snow Index (NDSI) [16] is made from the green (G) and mid-infrared (MIR) bands. The difference between visible and mid-infrared bands allows discriminating between snow and clouds: NDSI = (G - MIR) / (G + MIR).

  • The Normalized Difference Built-up Index (NDBI) [17] is created from the mid-infrared (MIR) and near-infrared (NIR) bands. This index enables distinguishing urban areas: NDBI = (MIR - NIR) / (MIR + NIR).
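The three indices above share the same normalized-difference form and can be sketched with one helper (a minimal NumPy sketch; the zero-safe denominator is our addition to avoid dividing by zero on dark pixels):

```python
import numpy as np

def nd_index(a, b):
    """Generic normalized difference (a - b) / (a + b) with a zero-safe denominator."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    denom = np.where(a + b == 0, 1.0, a + b)   # avoid division by zero
    return (a - b) / denom

def ndvi(nir, red):    # vegetation monitoring
    return nd_index(nir, red)

def ndsi(green, mir):  # snow / cloud discrimination
    return nd_index(green, mir)

def ndbi(mir, nir):    # urban area detection
    return nd_index(mir, nir)
```

All three return values in [-1, 1] and can be applied directly to whole reflectance arrays.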


The shape of the spectrum formed by the spectral band reflectances of each pixel is a very useful indication. Each type of Earth surface has its own shape. In order to add this information to the database, we fitted the spectrum of each pixel by a polynomial approximation over the wavelength values. Depending on the number of spectral bands of the studied sensor, we used a polynomial of degree three or four. The calculation of the spectrum derivative through a simple polynomial derivative is also possible but, in our case, adding these variables did not improve the classification. The polynomial coefficients are additional information and are added to the database.
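The polynomial fit of a pixel spectrum can be sketched as follows. The band centre wavelengths and reflectance values below are hypothetical placeholders, not the actual sensor bands used in this study:

```python
import numpy as np

# Hypothetical band centre wavelengths (micrometres) and one pixel's reflectances;
# the real values depend on the sensor being processed
wavelengths = np.array([0.47, 0.55, 0.65, 0.86, 1.24, 1.64, 2.13])
reflectances = np.array([0.08, 0.10, 0.09, 0.35, 0.30, 0.22, 0.15])

# Fit a degree-3 polynomial to the pixel spectrum; the four coefficients
# summarise the spectrum shape and are appended to the data vector
coeffs = np.polyfit(wavelengths, reflectances, deg=3)
```

A degree-3 fit yields four coefficients per pixel (highest degree first in NumPy's convention), which become four extra variables in the database.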

Some qualitative variables may also be added to the variable vector, such as information about the observed background or the weather, when available. They are used to interpret our classification results by comparison with these existing qualitative image masks. If a mask concerning the targeted classification is available (which is the case for MODIS, because a cloud mask is distributed as a Level 2 product), this mask is taken as an a priori label and added to the data vector as integer variables.

All variables were independently normalized between 0 and 1 in order to ensure that they have the same weight during the Kohonen learning phase, except for the integer variables (e.g. a priori labels).
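The per-variable min-max scaling can be sketched as below (a minimal sketch, assuming the continuous columns are identified by a boolean mask; in practice the min and max would be estimated on the learning database and reused for the test data):

```python
import numpy as np

def minmax_normalize(X, continuous_mask):
    """Scale each continuous column of X to [0, 1] independently;
    integer/binary columns (a priori labels) are left untouched."""
    X = np.asarray(X, dtype=float).copy()
    for j in np.where(continuous_mask)[0]:
        col = X[:, j]
        span = col.max() - col.min()
        if span > 0:                       # leave constant columns as-is
            X[:, j] = (col - col.min()) / span
    return X
```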

We then posed the problem by taking the imagery pixels as independent data, an assumption that is false at a local spatial scale but becomes reasonable when sub-sampling images over the full Earth. Pixels for which one variable is missing were not put in the databases.

For the constitution of the learning database, which must be representative of the problem, we decided to represent the different observable land covers equally, in order to avoid the under-representation of certain geophysical cases. Indeed, choosing a natural sampling of the Earth would lead to a massive representation of water pixels in the data distribution. Moreover, each type of forest, crop, field, desert and city is equally important from a physical point of view. In the case of cloud detection, we did the same with respect to the weather, by collecting the same number of pixels for each kind of cloud over each Earth surface type.

We also constituted a labelling database, smaller than the learning database, in which a priori knowledge is equally represented with respect to the targeted classes. This database is used to control or to create the neuron labels, depending on which Kohonen map algorithm is used during the learning phase. It is constituted by pixels taken from as many identified areas as desired classes, for example: urban areas, forest areas, crop areas, etc.

Finally, the testing database is constituted by all the data that has not been used for the constitution of the learning or labelling databases.

2.2. Calibration and labelling of a Kohonen map

The first step of our methodology is the Kohonen map calibration using the learning database. As already stated, we chose to use the MTM algorithm [10]. This algorithm is a classical topological map algorithm in which the use of binary variables is allowed. This is done by declaring which variables of the input vector are continuous and which ones are binary, and then applying the Euclidean distance to the continuous variables and the Hamming distance to the binary variables. The cost function is thus the sum of two parts: one is the classical cost function used by the Kohonen Batch algorithm [18] and is based on the continuous variables (this cost function is denoted Esom in the following); the other part is the cost function used in the so-called BinBatch algorithm [19] and is based on the binary variables (denoted Ebin).

For the learning phase, the classical Kohonen iterative scheme is applied, following two steps: the assignment step and the optimisation step. The assignment step consists in associating, at each iteration, each data vector of the learning database with its closest neuron given the current neighbourhood function (whose size decreases at each iteration), using a weighted sum of the Euclidean distance and the Hamming distance. The optimisation step consists in computing the new weights of each neuron using the input vectors assigned to it. The neuron weights are defined as the concatenation of the mean of the continuous part of the inputs and the median of their qualitative part. Iteratively performing the assignment and optimisation steps leads to the minimisation of Esom and Ebin.
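One batch iteration of this scheme can be sketched as follows. This is an illustrative sketch of the assignment/optimisation cycle described above, not the actual MTM implementation; the `alpha` weighting parameter and the thresholded-mean form of the binary median are our assumptions:

```python
import numpy as np

def mtm_batch_step(X_cont, X_bin, W_cont, W_bin, neigh, alpha=1.0):
    """One batch iteration of a mixed topological map (illustrative sketch).

    X_cont (n, dc): continuous part of the data
    X_bin  (n, db): binary part of the data (0/1)
    W_cont, W_bin : current weights of the k neurons
    neigh  (k, k) : current neighbourhood kernel h(r, s)
    alpha         : weight of the Hamming term in the mixed dissimilarity
    """
    # assignment step: weighted sum of squared Euclidean and Hamming distances
    d_euc = ((X_cont[:, None, :] - W_cont[None, :, :]) ** 2).sum(-1)
    d_ham = np.abs(X_bin[:, None, :] - W_bin[None, :, :]).sum(-1)
    bmu = np.argmin(d_euc + alpha * d_ham, axis=1)

    # optimisation step: neighbourhood-weighted mean for the continuous part,
    # weighted median (thresholded weighted mean) for the binary part
    H = neigh[bmu]                                    # (n, k) influence weights
    total = np.maximum(H.sum(axis=0), 1e-12)[:, None]
    new_W_cont = H.T @ X_cont / total
    new_W_bin = (H.T @ X_bin / total > 0.5).astype(float)
    return new_W_cont, new_W_bin, bmu
```

Shrinking the neighbourhood kernel `neigh` between iterations reproduces the usual coarse-to-fine organisation of the map.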

In the case of cloud detection, because a priori labels were available, the MTM algorithm was used to calibrate a Kohonen map, using binary masks to complete the radiometric information and giving equal weight to the radiometric information and the binary masks. But in the case of land cover mapping, no a priori knowledge was available, so the MTM code was applied without a binary part in the input vector, which means that we used a classical Kohonen algorithm only. For both cases, we used an exponential neighbourhood function.

The second step of our methodology is the labelling of the Kohonen map neurons, according to the targeted classes of each problem. If a priori labels are present in the learning database, the MTM algorithm automatically labels each neuron by majority vote. If no a priori labels are present in the learning database, we project the labelling database onto the Kohonen map and label the neurons by majority vote.
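The majority vote over a labelling database can be sketched as below (a minimal sketch; `label_neurons` is a hypothetical helper name, and the `-1` default for neurons that capture no labelled pixel is our convention):

```python
import numpy as np
from collections import Counter

def label_neurons(bmu, labels, n_neurons, default=-1):
    """Label each neuron by majority vote over the labelling-database
    pixels assigned to it; neurons with no assigned pixel get `default`."""
    neuron_labels = np.full(n_neurons, default)
    for r in range(n_neurons):
        hits = labels[bmu == r]
        if hits.size:
            neuron_labels[r] = Counter(hits.tolist()).most_common(1)[0][0]
    return neuron_labels
```

Neurons left at the default value correspond to the unidentified neurons mentioned below, to be corrected manually or by clustering.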

To finalise the resulting classification, we have to control the learned labels and regroup them in order to match the desired classes. This control is performed on images that have not been used for the labelling (the testing data set). Label controls are applied on images and with respect to the topology of the Kohonen map: one label must correspond to neighbouring labelled neurons on this map. If neurons remain unidentified, they may be corrected manually or automatically (for example by hierarchical clustering [12]).

2.3. Operation of a Kohonen map

Once calibrated and labelled on a representative learning data set, the Kohonen map may be used in an operational mode, i.e. using only the assignment algorithm in order to associate a winning neuron, and thus a winning label, with each new pixel presented to the map.

This process is sufficient in the case of LR imagery to obtain smooth, easy to interpret classified image masks, superimposable on the processed satellite product. But it may be insufficient in the case of HR imagery, because in this case images have much more high frequencies in their radiometric values, and thus the result of the Kohonen map labelling may seem noisy to a human eye.

In the case of HR imagery only, we thus decided to apply image processing to complete the pixel classification process. It is needed because of the high frequencies in the radiometric values, but it is also justified by the local spatial pixel correlation, which may be very high in HR imagery (and is much smaller in LR imagery). Many algorithms may be used to complete or correct the first classification result given by a Kohonen map: from simple local filters computed on a local window, like mean or median filters, to more sophisticated image processing solutions like region growing algorithms [20].

In this study, to complete the classification, we simply chose to perform a majority vote between labels within a local window, because we wanted to avoid computationally costly algorithms. We thus do not use any radiometric information to perform this post-processing. Had we chosen a true image processing algorithm, we would have replaced each pixel radiometric value by the weights of its winning neuron before performing the image processing scheme. In this way, we would have simplified the problem (by removing the highest radiometric frequencies) and gained computational and scientific performance on the resulting classification. In other study cases, we perform this simplification prior to change detection.
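The local-window majority vote can be sketched as follows (a naive sketch with explicit loops; a production version would vectorise this or handle the image borders more carefully than simply leaving them unchanged):

```python
import numpy as np
from collections import Counter

def majority_filter(label_img, radius=1):
    """Replace each pixel label by the majority label inside a
    (2*radius+1)^2 window; border pixels keep their original label."""
    out = label_img.copy()
    h, w = label_img.shape
    for i in range(radius, h - radius):
        for j in range(radius, w - radius):
            window = label_img[i - radius:i + radius + 1,
                               j - radius:j + radius + 1].ravel()
            out[i, j] = Counter(window.tolist()).most_common(1)[0][0]
    return out
```

Because the vote operates on labels only, no radiometric information is used, matching the post-processing described above.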

Figure 1.

RGB composite image from MODIS data, North of Great Lakes, Canada.


3. First use-case: Cloud detection

The determination of the presence of global cloudiness is essential because clouds play a critical role in the radiative balance of the Earth and must be accurately described to assess climate and potential climate change. Moreover, the presence of cloudiness must be accurately determined to properly retrieve many atmospheric and surface parameters. For many of these retrieval algorithms, even thin cirrus represents contamination.

Visible and infrared window threshold approaches are widely used in cloud detection because they easily separate bright, dense and cold surfaces. However, some surface conditions such as snow cover, ice cover, bright desert surfaces or even salty areas can be detected as clouds by these classical tests. Moreover, some cloud types such as thin cirrus, low stratus at night and small cumulus are difficult to detect because of insufficient contrast with the surface radiance.

In brief, cloud detection is a difficult and recurrent problem when using a sensor that does not provide information in the thermal infrared. Classical threshold methods on reflectance data are insufficient, even though many attempts have been made to create decision trees depending on latitude, season and observed surface [21], because the detection problem is continuous and non-linear when all kinds of Earth surfaces and all cloud optical thicknesses are taken into account.

We thus decided to challenge one of these classical algorithms with a Kohonen map used for classification. The MODIS sensor was chosen because of its large number of spectral bands (eight for land observation, from visible to mid-infrared) and because of the availability of validated, though not perfect, cloud detection masks deduced from a decision tree [8]. In order to calibrate the MTM algorithm on the entire Earth, fourteen sites were chosen all over the planet. For each site, five satellite images (MODIS Calibrated Radiances Level 1B products, MYD021KM collection 5.1) were selected at different seasons for their balanced cloud proportions (an example is shown in Figure 1).

3.1. Data preparation for cloud detection on MODIS data

Multiple Level 2 cloud masks (MYD35_L2 collection 5) were combined to produce five "weather" labels (clear sky, thin clouds, thick clouds, shadows and indeterminate pixels) and six background labels (water, coastal, glitter, snow/ice, land and desert) (Figure 2). These labels allowed the random selection of the same number of pixels per background type and weather type (about 100,000 per combination), avoiding indeterminate weather pixels, leading to a learning database containing a total of two million pixels.

The learning and testing databases were built using the eight spectral bands, with a gamma correction for the RGB bands. Moreover, we decided to add the new colour spaces: Lab, CMYK and HSV. As the H component of HSV is a circular angle between 0° and 360°, it was separated into two variables corresponding to its sine and cosine. A polynomial regression of degree four was applied to each pixel of the database and the polynomial coefficients were added. We also computed the NDVI index in order to improve the discrimination efficiency between cloud and snow or ice.
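The sine/cosine encoding of the circular Hue component can be sketched in two lines; this guarantees that 0° and 360° map to the same point in the feature space:

```python
import numpy as np

def encode_hue(h_deg):
    """Encode the circular Hue component (degrees) as two continuous
    variables, removing the artificial discontinuity at 0/360 degrees."""
    rad = np.deg2rad(h_deg)
    return np.sin(rad), np.cos(rad)
```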

First, we performed a Linear Discriminant Analysis using the four weather labels to check the linearity of the problem. It clearly appeared that linear models cannot discriminate thin clouds from thick clouds. During this descriptive analysis of the database, we also studied the correlation matrix of the variables. It appeared that some components of the new colour spaces are correlated, such as the K component of CMYK and the L component of Lab. We then computed another Linear Discriminant Analysis, removing the correlated variables and using only two classes: clear weather (regrouping clear sky and shadows) and cloudy weather (regrouping thin and thick clouds). The results were better than the first ones but not sufficiently discriminant. It was thus justified to attempt the use of the MTM algorithm in order to distinguish several cloud optical depths.

As a result of this prior study, we decided to include the cloud probability obtained by the second Linear Discriminant Analysis in the input vector constituting the learning and testing databases of the Kohonen map, as a new pre-processing variable. We also added two qualitative variables to the input vector: the weather labels and the background labels, in order to take advantage of the binary variable processing of the MTM algorithm.

Figure 2.

Example of background labels (on the left) and weather labels (on the right) created from MODIS masks. White pixels correspond to non available data. North of Great Lakes, Canada.

3.2. Calibration and labelling of a Kohonen map for cloud detection on MODIS data

The MTM algorithm was then applied to the learning database in order to calibrate a 20 by 20 neuron map. After a short study of the first results, it was decided to make the following changes:

  • coastal pixels were removed from the database, because they were radiometrically unstable,

  • glitter pixels were relabelled as water pixels, because they correspond to a particular case of water observation,

  • shadows were declared as clear sky, because they correspond to a particular case of clear sky observation,

  • for thick clouds, there was no distinction between backgrounds, so we created a new background for them (equivalent to an undetermined background).

Figure 3.

Example of cloud detection made by the MTM algorithm, two optical depths thresholds: “thick” in red, “thin” in yellow, (black pixels correspond to non available data). North of Great Lakes, Canada.

After these modifications, a new 20 by 20 neuron map was produced, with neurons pre-classified by the MTM algorithm. In order to correct the map (and at the same time the MODIS mask), three test images were chosen with different backgrounds and at different latitudes: an image of the Great Lakes in Canada, to see the difference between snow/ice and cloud (top of Figure 3), another image over the Sahara to discriminate clouds from sand, and a last image over the Mediterranean Sea with glitter. For each neuron, we studied the corresponding pixels on the three images and corrected its label when necessary, respecting the Kohonen map topology (see paragraph 2.2).

3.3. Operation of a Kohonen map for cloud detection on MODIS data

Using confusion matrices between the MODIS Level 2 cloud mask and the MTM results computed on the test database (in operation mode only), we showed that we entirely retrieved the MODIS Level 2 mask results in confident cases and improved on it in indeterminate cases. As can be seen in the Great Lakes image, undefined pixels over water appear as clear sky, and our cloud detection classification makes fewer errors on snow and ice. More precisely, if we compare Figure 2 (right) and Figure 3 to Figure 1, we can see that fewer pixels are undefined in the presence of snow, and that fewer pixels are over-detected as clouds. This is confirmed when computing confusion matrices on this image (Tables 1 and 2), where we can see that 11.7% of the pixels are declared as thick or thin clouds by the MODIS mask but as clear sky by the MTM algorithm, and 28.5% of the pixels are declared as undefined by the MODIS mask although declared as clear sky by the MTM algorithm (Table 1).

                           Results of the MTM algorithm
                         Clear sky   Thin cloud   Thick cloud
MODIS     Clear sky         33.104        0.022         0.000
cloud     Thin cloud         6.277        1.364         5.484
mask      Thick cloud        5.397        3.241        16.315

Table 1.

Confusion matrix (distribution of pixels in %) comparing results of the MTM algorithm after correction and MODIS cloud mask on the image of North of Great Lakes, Canada.

Table 2 shows the confusion matrix computed between the two algorithms on pixels declared as snow by the MODIS background mask. We can see that all snow pixels are declared as clear sky by the MTM algorithm, while 45.5% of them were declared undefined by the MODIS cloud mask.

We can note from the examination of these matrices that the definition of thin cloud differs between the MODIS and MTM classifications, because we did not choose the same corresponding cloud optical thickness. On the Great Lakes image, 5.5% of the pixels declared as thin cloud by MODIS are declared as thick cloud by the MTM algorithm.

3.4. Generalisation to other sensors

Some other sensors have their own cloud masks that could be used to perform the same kind of study, but they are built from threshold values generally using fewer spectral bands than MODIS, leading to lower-quality reference cloud masks. In order to create cloud detection masks for other sensors, we thus decided to create pseudo-sensors (Landsat [2], Pleiades [3], Sentinel 2 [22], etc.) using the MODIS learning and testing databases, keeping only the MODIS spectral bands closest to those of the targeted sensors. As an example, learning and testing databases containing the bands 1, 2, 3 and 4 of a pseudo HRG sensor were constituted using the MODIS bands 4, 1, 2 and 6, respectively.
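The pseudo-sensor construction amounts to a column selection on the MODIS databases. A minimal sketch, using the MODIS-to-HRG band mapping quoted above (band numbers are 1-based, hence the offset when indexing columns):

```python
import numpy as np

# Pseudo-HRG bands 1-4 are built from MODIS bands 4, 1, 2 and 6
MODIS_TO_HRG = [4, 1, 2, 6]

def make_pseudo_sensor(modis_reflectances, band_map=MODIS_TO_HRG):
    """Keep only the MODIS bands closest to the target sensor's bands.

    modis_reflectances: array of shape (n_pixels, n_modis_bands),
                        columns ordered by MODIS band number
    """
    cols = [b - 1 for b in band_map]       # 1-based band numbers -> 0-based columns
    return np.asarray(modis_reflectances)[:, cols]
```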

                           Results of the MTM algorithm
                         Clear sky   Thin cloud   Thick cloud
MODIS     Clear sky         54.459        0.009         0.000
cloud     Thin cloud         0.005        0.000         0.000
mask      Thick cloud        0.002        0.000         0.000

Table 2.

Confusion matrix (distribution of pixels in %) for pixels labelled as snow or ice by MODIS mask, comparing results of the MTM algorithm after correction and MODIS cloud mask on the image of North of Great Lakes, Canada.

Figure 4.

On the left: RGB composite image of SPOT 5 HRG Level 1B data. On the right: results of the MTM algorithm calibrated using pseudo HRG data and applied to real SPOT 5 HRG data (in orange: thick clouds, in red: thin clouds, in blue: clear sky). Reunion Island, France.

We logically showed that the fewer the bands, the more difficult it is to discriminate clouds from bright targets. But the performance decreases more rapidly when using classical linear algorithms; we thus still encourage the use of Kohonen maps, even when less multi-spectral information is available (Figure 4).


4. Second use case: Land cover mapping

Human actions have a negative impact on biodiversity. Road construction and urban expansion fragment wildlife habitats. Some species need a minimum area to live, and the reduction of their habitats can lead to their extinction. In order to reduce these effects, some infrastructures help wildlife to cross over or under roads, such as amphibian tunnels and wildlife bridges. Satellite data can help governments to detect urban and natural surfaces from the sky and verify whether natural areas are sufficiently vast and connected. Moreover, land cover mapping has many applications in the urban planning sector; we thus decided to test the MTM algorithm [10] on this problem in order to verify its applicability to HR data processing.

4.1. Data preparation for land cover mapping on HRG data

We chose to use SPOT 5 (Satellite Pour l'Observation de la Terre) HRG Level 1B images at 20 m spatial resolution [11]. Only four spectral bands (at 545 nm, 645 nm, 885 nm and 1665 nm) are available. The HRG spectral information is thus very different from that of MODIS. Moreover, unlike in the MODIS use-case, we did not have any qualitative information to help us label the data. The database was generated from zooms on seven cities in France and their surrounding landscapes (three cities around Bordeaux, three cities on Reunion Island and three zooms on different sectors of Toulouse) at different seasons and latitudes. These zooms were made so as to have as many urban pixels as natural pixels in each of them.

Because there are only four spectral bands, adding new variables such as physical indexes is very important (please refer to the Methodology chapter). For this specific problem, we chose to add to the database the NDVI, the NDBI, the NDSI, the coefficients of a third-degree polynomial fit, and gamma-corrected versions of the green and red bands (see paragraph 2.1). As no blue band is available on the HRG sensor, we could not create new colour spaces. The database was thus made of eleven variables, whereas only four reflectances were available as input.
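The feature augmentation described above can be sketched as follows. This is a minimal illustration, not the authors' code: the exact index definitions follow the usual normalized-difference formulas, the gamma value (2.2) is an assumption, and the polynomial coefficients of the original eleven-variable set are omitted for brevity.

```python
import numpy as np

def derived_features(green, red, nir, swir, gamma=2.2):
    """Stack normalized-difference indexes and gamma-corrected bands
    on top of the four HRG reflectances (all arrays scaled to [0, 1])."""
    eps = 1e-6  # avoid division by zero on dark pixels
    ndvi = (nir - red) / (nir + red + eps)        # vegetation index
    ndbi = (swir - nir) / (swir + nir + eps)      # built-up index
    ndsi = (green - swir) / (green + swir + eps)  # snow / bright-soil index
    # gamma correction stretches the dark part of the histogram
    green_g = np.clip(green, 0.0, 1.0) ** (1.0 / gamma)
    red_g = np.clip(red, 0.0, 1.0) ** (1.0 / gamma)
    return np.stack([green, red, nir, swir,
                     ndvi, ndbi, ndsi, green_g, red_g], axis=-1)
```

Each pixel is thus turned into a feature vector before being fed to the map, so that the classifier sees physically meaningful contrasts and not only raw reflectances.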

In order to constitute the learning database, about four million pixels were randomly chosen and controlled so as to have the same number of pixels for each town and each type of vegetation from the nine chosen images.

In this study, no labels are available for HRG pixels; we thus created the labelling database from the available images: four labelled sub-databases, corresponding to the four backgrounds to be distinguished (water, city, crop and forest), were built from several small zooms on the images. However, this labelled database was not entirely consistent; for example, despite all our attention, some gardens and trees could be found in the small zooms made on cities.

4.2. Calibration and labelling of a Kohonen map for land cover mapping on HRG data

The MTM algorithm was applied to the database with a square map of 20 by 20 neurons and 100 learning iterations. Because no validated a priori knowledge was available concerning the targeted classes, no qualitative variables were used during the learning phase. The labelling database was used to label the Kohonen map by majority vote after the learning phase (please refer to the Methodology paragraph), but because this database was not completely consistent, we kept in mind that it only gave a first idea of the Kohonen map labelling.
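The majority-vote labelling step can be sketched as below. This is an illustrative reconstruction, not the MTM implementation: it assumes that the best-matching neuron of each labelled sample has already been computed during the learning phase, and that neurons winning no labelled sample are flagged as "indefinite" (coded -1).

```python
import numpy as np

def label_map_by_majority(bmu_of_sample, sample_labels, n_neurons, n_classes):
    """Assign each neuron the most frequent class among the labelled
    samples it wins; a neuron with no vote stays 'indefinite' (-1)."""
    votes = np.zeros((n_neurons, n_classes), dtype=int)
    for neuron, cls in zip(bmu_of_sample, sample_labels):
        votes[neuron, cls] += 1
    neuron_labels = np.full(n_neurons, -1, dtype=int)
    voted = votes.sum(axis=1) > 0
    # ties are broken by the lowest class index (argmax convention)
    neuron_labels[voted] = votes[voted].argmax(axis=1)
    return neuron_labels
```

Classifying an image then reduces to finding each pixel's winning neuron and reading its label off this table.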

Figure 5.

Kohonen activation map obtained by majority vote (on the left) and the same activation map after correction on the image of Toulouse (on the right).

Figure 6.

RGB composite image (B4, B3, B2, respectively) of SPOT 5 HRG data, May 21st 2011, Toulouse, France.

After the projection of the labelling database on the map, only one neuron could not be clearly attributed to a class and was flagged as “indefinite” (Figure 5, left). In order to correct the Kohonen map, a visual rectification was made with the help of three representations of Toulouse: the RGB image computed from the original HRG data (RGB composite of bands B4, B3 and B2, May 21st 2011, Figure 6), the Google Maps website, and the French website Géoportail created by IGN (Institut Géographique National) and BRGM (Bureau de Recherches Géologiques et Minières). It appeared that a new class, distinguishing bare soil, had to be taken into account. This new class was added to the corrected Kohonen map (Figure 5, right). Results of the Kohonen classification before and after the map correction are shown in Figure 7 (on the Toulouse image).

4.3. Operation of a Kohonen map for land cover mapping on HRG data

During the Kohonen map correction, we decided to favour crop, water and forest over city pixels, and then to add a new step consisting of a sliding window filter favouring city pixels (please refer to paragraph 2.3). After a sensitivity study of the window length and of the class priorities, two window filters were applied:

  • A first 7 by 7 window filter was applied to the class image of Toulouse with the following rule: if one pixel of the window belongs to the class town, the central pixel is set to town; else, if one pixel of the window belongs to the class water, the central pixel is set to water; otherwise, a majority vote over the window is made between forest, crop and bare soil.

  • Then, a second 7 by 7 window filter was applied with a majority vote without class priority.
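The first, priority-based pass can be sketched as follows. This is a naive per-pixel illustration of the rule stated above, not the operational code; the integer class codes are hypothetical, and border windows are simply clipped to the image.

```python
import numpy as np
from collections import Counter

CITY, WATER, FOREST, CROP, BARE = 0, 1, 2, 3, 4  # hypothetical class codes

def priority_filter(classes, size=7):
    """First 7x7 post-filter pass: city, then water, take priority;
    otherwise a plain majority vote over the remaining classes."""
    h, w = classes.shape
    r = size // 2
    out = classes.copy()
    for i in range(h):
        for j in range(w):
            win = classes[max(0, i - r):i + r + 1,
                          max(0, j - r):j + r + 1].ravel()
            if (win == CITY).any():
                out[i, j] = CITY
            elif (win == WATER).any():
                out[i, j] = WATER
            else:
                # majority vote among forest, crop and bare soil
                out[i, j] = Counter(win.tolist()).most_common(1)[0][0]
    return out
```

The second pass is the same loop with the two priority branches removed, i.e. a plain majority vote, which smooths the map produced by the first pass.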

Figure 7.

Visualisation of the activation map obtained by majority vote, before (left) and after (right) the correction of the neuron labelling. May 21st 2011, Toulouse, France.

These filters allowed us to correct the Kohonen map classification by taking spatial information into account (see Figure 8). In order to finalize this product for scientists and governmental administrations, we will have to add real geolocated information to this image, such as roads and existing infrastructure. Indeed, on this interpreted SPOT 5 HRG image of Toulouse, we can distinguish highways, but they do not appear continuous. Other roads are not visible because of the resolution of the sensor. We will have to reconstruct them with the help of a geolocated map of roads, using different information depending on the targeted product.

As an example, if we target environmental monitoring, we will have to include animal bridges, which can appear as discontinuities in a road and must be kept as possible animal paths. Moreover, we need information about existing wildlife crossings which are not visible by satellite, such as amphibian underpass tunnels. On the other hand, bridges detected over rivers are real road detections. This also means that post-processing will differ depending on the map application; for example, road bridges are not useful for fish studies.

Results were visually validated on the Google Earth cartographic background, showing a slight over-detection of urban structures (Figure 9). Nevertheless, the method seems well suited to automatic land cover mapping, which today is generally done manually in operational structures. Indeed, temporally stable structures such as rivers, forests and down-town urban buildings were well retrieved.

Figure 8.

Results of the classification after the image processing step, May 21st 2011, Toulouse, France.

In order to finalize this study, the next step will be the geolocation of roads and other infrastructures built for animals. Once this information is added to the database, the living environment of each type of animal can be studied. Indeed, we also have an algorithm able to reproduce animal movements from a starting pixel, which can be used to visualize their displacements for a chosen kind of land cover. Moreover, this methodology needs to be validated on another city.


5. Conclusion

The application of Kohonen maps to satellite cloud detection and land cover mapping has been presented as an alternative to classical decision tree algorithms based on linear tests (e.g. thresholds applied to radiometric information). The general methodology using the MTM algorithm [10] was explained (paragraph 2) and then detailed for the two use cases, which correspond to low resolution (LR) imagery and high resolution (HR) imagery, respectively (paragraphs 3 and 4).

Results show that, in the case of cloud detection, the use of the MTM algorithm significantly improves detection over bright targets such as snow, ice and deserts, reducing false detections and indetermination when compared to the operational MODIS Level 2 cloud mask. These results are very important for scientists, as cloud and snow/ice masks are among the necessary inputs for computing the radiative budget of the planet.

Figure 9.

Automatic land cover detection by Kohonen map (city in black, bare soil in yellow, crop in light green, forests in dark green, water in blue): validation by projection on Google Earth. Toulouse, France.

In the case of land cover mapping, results show that a fully automatic algorithm may be used to rapidly interpret HR imagery, producing instantaneous cartographic maps for a wide range of applications, from environmental survey to urbanism. The classification results, interpreted as the number of pixels of each targeted class in the image, may also be used to index images in voluminous satellite databases, improving the extraction of useful information from them. Moreover, because Kohonen map neurons correspond to radiometric information, they may also be used to radiometrically simplify voluminous HR images prior to further image processing, such as the change detection we propose in paragraph 2.3, or even before storing them in satellite databases for storage economy. This would be done by saving only the image of the winning neuron of each pixel (a single integer image instead of about ten radiometric bands), together with the table of corresponding radiometric vectors.
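The compression scheme suggested above amounts to vector quantization with the trained map as a codebook. A minimal sketch, assuming a precomputed codebook of neuron reference vectors and brute-force Euclidean matching (the function names are illustrative):

```python
import numpy as np

def compress_with_codebook(image, codebook):
    """Replace each pixel's radiometric vector by the index of its
    nearest neuron; only this small integer image and the codebook
    need to be stored."""
    h, w, d = image.shape
    flat = image.reshape(-1, d)
    # squared Euclidean distances from every pixel to every neuron
    d2 = ((flat[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d2.argmin(axis=1).astype(np.uint16).reshape(h, w)

def decompress(indices, codebook):
    """Rebuild an approximate radiometric image from the neuron indexes."""
    return codebook[indices]
```

For a 400-neuron map, each pixel costs 2 bytes instead of about ten floating-point reflectances, while the reconstructed image stays radiometrically close to the original within the quantization error of the map.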

Finally, because Kohonen maps are a simple solution for multivariate classification, we think they will allow the fusion of optical and SAR imagery in the case of well co-registered missions.



Acknowledgements

The MODIS L1B data were obtained through the online Data Pool at the NASA Land Processes Distributed Active Archive Center (LP DAAC), USGS/Earth Resources Observation and Science (EROS) Center, Sioux Falls, South Dakota.

We thank the French space agency (Centre National des Etudes Spatiales, CNES) and the Kalideos project for the courtesy of SPOT 5 HRG imagery. The cloud detection study was funded by the CNES Research and Technology program, Observation de la Terre axis n°4 (Méthodes pour l'Extraction d'Informations des Images).


References

  1. European Space Agency (2011). Medium Resolution Imaging Spectrometer (MERIS) Product Handbook, Issue 3.
  2. NASA's Earth Observing System (2012). Landsat 7.
  3. Centre National d'Etudes Spatiales (2006). Main characteristics of the Pleiades mission.
  4. Deutsche Forschungsanstalt für Luft- und Raumfahrt (2007). TerraSAR-X Ground Segment, Level 1b Product Format Specification, Issue 1.3 (TX-GS-DD-3307), 257 p.
  5. Earth Observation Research Center, Japan Aerospace eXploration Agency (1997). ALOS User Handbook (NDX-070015).
  6. Agenzia Spaziale Italiana (2007). COSMO-SkyMed System Description & User Guide, Rev. A (ASI-CSM-ENG-RS-093-A), 49 p.
  7. Kerr Y., Waldteufel P., Wigneron J. P., Font, Berger (2003). The Soil Moisture and Ocean Salinity Mission. IGARSS 2003, Toulouse.
  8. Ackerman S., Strabala K., Menzel P., Frey R., Moeller C., Gumley L., Baum B., Wetzel Seemann S., Zhong H. (2006). Discriminating clear sky from cloud with MODIS. Algorithm Theoretical Basis Document MOD35, 129 p.
  9. PELCOM project (2000). Development of a consistent methodology to derive land cover information on a European scale from remote sensing for environmental modelling. PELCOM Final Report, DGXII, editor C.A. Mücher, 299 p.
  10. Lebbah M., Chazottes A., Thiria S., Badran F. (2005). Mixed Topological Map. ESANN 2005 Proceedings, European Symposium on Artificial Neural Networks, Bruges, April 26-29, pp. 357-362.
  11. ASTRIUM / CNES (2012). SPOT: accuracy and coverage combined.
  12. Niang A., Gross L., Thiria S., Badran F. (2003). Automatic neural classification of ocean colour reflectance spectra at the top of the atmosphere with introduction of expert knowledge. Remote Sensing of Environment, 86, 257-271.
  13. Breon F. M., Buriez J. C., Couvert P., Deschamps P. Y., Deuze J. L., Herman M., Goloub P., Leroy M., Lifermann A., Moulin C., Parol F., Seze G., Tanre D., Vanbauce C., Vesperini M. (2002). Scientific results from the POLarization and Directionality of the Earth's Reflectances (POLDER). Advances in Space Research, 30(11), 2383-2386.
  14. Jovanovic V., Miller K., Rheingans B., Moroney C. (2012). Multi-angle Imaging SpectroRadiometer (MISR) Science Data Product Guide (JPL D-73355).
  15. Pascale D. (2003). A Review of RGB Color Spaces... from xyY to R'G'B'.
  16. Salomonson V. V., Appel I. (2006). Development of the Aqua MODIS NDSI Fractional Snow Cover Algorithm and Validation Results. IEEE Transactions on Geoscience and Remote Sensing, 44(7).
  17. Xu H. (2007). Extraction of Urban Built-up Land Features from Landsat Imagery Using a Thematic-oriented Index Combination Technique. Photogrammetric Engineering & Remote Sensing, 73(12), 1381-1391.
  18. Kohonen T. (1994). Self-Organizing Maps. Springer, Berlin.
  19. Lebbah M., Thiria S., Badran F. (2000). Topological Map for Binary Data. Proceedings of ESANN 2000, Bruges, April 26-28.
  20. Baatz M., Schäpe A. (2000). Multiresolution Segmentation: an optimization approach for high quality multi-scale image segmentation. Journal of Photogrammetry and Remote Sensing, 58(3-4).
  21. Heidinger A. K., Anne V. R., Dean C. (2002). Using MODIS to estimate cloud contamination of the AVHRR data record. Journal of Atmospheric and Oceanic Technology, 19, 586-601.
  22. European Space Agency (2010). Sentinel 2 Payload Data Ground Segment, Product Definition Document (GMES-GSEG-EOPG-TN-09-0029).
