Open access peer-reviewed chapter

Spatial Change Recognition Model Using Image Processing and Fuzzy Inference System to Remote Sensing

Written By

Majid Mirbod

Submitted: 08 June 2022 Reviewed: 10 November 2022 Published: 19 December 2022

DOI: 10.5772/intechopen.108975

From the Edited Volume

Intelligent Video Surveillance - New Perspectives

Edited by Pier Luigi Mazzeo


Abstract

After the advent of satellites whose job is to image the surface of the Earth, a huge database of Earth-surface imagery became available to researchers in many sciences, and remote sensing gradually attracted attention across fields: geography, environmental science, civil engineering, and others each analyze visual data of the Earth's surface from the perspective of their own discipline. This research addresses the recognition of spatial changes, their location, and the percentage of change at ground level. The model presented is based on machine vision, image processing, and a fuzzy inference system to reveal features. The work falls in the category of applied research, and an application is presented that could extend software such as Google Earth as an added option. A further advantage of the model is its ease of use compared with specialized software such as ArcGIS, which constitutes the novelty of this research.

Keywords

  • fuzzy inference system
  • spatial change recognition
  • remote sensing
  • image processing
  • remote sensing application

1. Introduction

This paper presents a spatial change recognition model using satellite images, image processing, and a fuzzy inference system as a remote sensing application; it falls in the applied research category. Recognizing change in natural phenomena is very important for managing and preserving the environment. Imaging satellites now produce large volumes of images of the Earth's surface and make them available to researchers, for example through Google Earth. However, an image taken by a satellite shows the state of the Earth only at the moment of capture; to learn whether an area has changed, that image must be compared precisely with earlier images of the same area. Change recognition in a study area can therefore inform how the environment there is managed and controlled, and change recognition in remotely sensed images remains an active research area [1]. In remote sensing, change recognition means detecting change on the Earth's surface by processing images of the same geographical area acquired at different times. Applications include forest or vegetation change, forest mortality, defoliation and damage assessment, wetland change, urban expansion, crop monitoring, changes in glacier mass balance, environmental change, deforestation, regeneration, and selective logging [2].

We briefly review some past research in this regard. "Land cover change detection using GIS and remote sensing techniques: A spatio-temporal study on Tanguar Haor, Sunamganj, Bangladesh" [3] uses a classification area model. "Automated unsupervised change detection technique from RGB color image" computes correlation coefficients between the color signatures of each pair of associated pixels in two satellite images of the same area [4]. A change detection model must be insensitive to changes in illumination and brightness [5]; for this reason, the proposed model uses high-resolution grayscale images so that variations in light and brightness in the satellite images do not affect the calculation of spatial changes. "Change detection in the city of Hilla between 2007 and 2015 using remote sensing techniques" uses ArcGIS 10.4 software, with processing that includes geometric correction, spectral enhancement, image classification, and cartographic output [6]; as mentioned, one advantage of the present model is its ease of use compared with specialized software such as ArcGIS, which is the novelty of this research. Other work has studied patterns of vegetation change at the Angkor world heritage site by combining remote sensing results with local knowledge, based on analysis stages that extract spectral plots of pixel values in the region of interest [7], and automatic change detection of buildings in an urban environment from very high spatial resolution images using an existing geo-database and prior knowledge, based on an image segmentation model whose detected-area ratio was 86–90% [8]. Finally, a survey of change detection methods based on remote sensing images for multi-source and multi-objective scenarios shows that spatial change recognition as a remote sensing application is significant in numerous change detection applications, as the chart below illustrates (Figure 1) [9].

Figure 1.

Published literature statistics of urban change detection according to the keywords remote sensing and urban change detection in Web of Science (total of 1283 publications) [9].

Another study, "Change detection of soil formation rate in space and time based on multi-source data and geospatial analysis techniques," estimated the dissolution rate and soil formation rate in karst areas of China and analyzed their spatial diversity [10]. In other work, change detection techniques based on multispectral images for investigating land cover dynamics were examined using image processing and mining [11], and the study "Change detection techniques for remote sensing applications" surveyed the distribution of change detection methods [12].

Another study used an image enhancement method to improve the accuracy of SAR1 image change detection [14]. In another study, change detection was performed based on similarity measurement between heterogeneous images [15]. Elsewhere, change detection was performed based on the maximum entropy principle to obtain the final change detection map, and compared against wavelet-based textural features, plain texture difference, image difference, and log-ratio methods [16].


2. Materials and methods

2.1 Materials

In this research, we use publicly available data, requiring no special software or specialist knowledge to collect, and test the proposed model with it. Google Earth is one of the most popular and widely available applications. Since the model is universal and image quality is enhanced using image processing techniques, it is enough to give the model two images of exactly the same location captured at two different times, and the model then displays and calculates their differences. So, for example, we capture images of several places around the world using the historical imagery (time change) feature in Google Earth and test them with the proposed model. A very important point is that the location specifications must be exactly the same in both captures, which Google Earth supports: without moving from the desired location, it is enough to change the timeline and capture two images of the place at two different times. Below are examples of images acquired from Google Earth at a fixed location at two different times (Figures 2–5).

Figure 2.

A: Greenland 1930, B: Greenland 2021. Source and specifications of images: NOAA, US Navy, NGA, GEBCO, Landsat; 74°22′50.50″ N, 45°02′01.17″ W; elev 2811 m; height: 3601.47 km.

Figure 3.

A: Jumeirah Palm beach 2000, B: Jumeirah Palm beach 2000. Source and specifications of images: NOAA, US Navy, NGA, GEBCO, Landsat; 25°06′42.80″ N, 55°03′43.93″ E; elev 10 m; height: 38.26 km.

Figure 4.

A: Oroomiye Lake 1984, B: Oroomiye Lake 2017. Source and specifications of images: NOAA, US Navy, NGA, GEBCO, Landsat; 37°08′58.85″ N, 45°12′31.05″ E; elev 2318 m; height: 157.76 km.

Figure 5.

A: South Pole 1957, B: South Pole 2022. Source and specifications of images: NOAA, US Navy, NGA, GEBCO, Landsat; 83°32′58.95″ S, 62°22′15.95″ E; elev 3178 m; height: 7734.93 km.

Similarly, any other location can be imaged and compared at two times; the proposed model has no limitations, provided the exact location specifications are kept and the camera does not change in geographical coordinates or height.

2.2 Methods

The basic model we use for change recognition in images was previously used to detect changes in industrial parts, where imaging was performed locally by a camera using a micrography technique [17]. The input to the model is the images prepared as described in the previous section. Figure 6 shows the spatial change recognition model for remote sensing.

Figure 6.

Spatial change recognition model to remote sensing.

2.2.1 Description of model components

2.2.1.1 Acquisition of the first and second spatial image

The source for acquiring satellite spatial images with historical coverage is the Google Earth application.

2.2.1.2 Preparing spatial image data

We used the MATLAB Image Processing Toolbox for the image mining steps, including converting RGB images to grayscale and converting the intensity image to double (the pre-processing and data preparation stage), and we implemented the other parts of the model in MATLAB as well. The reason for converting images from RGB to grayscale is to reduce the data from three dimensions to two, simplifying the problem. MATLAB provides the function "rgb2gray" to convert RGB images to grayscale, which we used [17].
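A minimal sketch of this preparation stage is shown below; the file names are assumptions for illustration only:

  % Preparation stage sketch (file names assumed for illustration):
  img1 = imread('site_time1.jpg');     % first spatial image (time 1)
  img2 = imread('site_time2.jpg');     % second spatial image (time 2)
  I1 = im2double(rgb2gray(img1));      % RGB -> grayscale -> double in [0, 1]
  I2 = im2double(rgb2gray(img2));

The variables I1 and I2 produced here are the prepared inputs used by the later sketches in this chapter.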

2.2.1.3 Edge detection from spatial images with different techniques

In this part, three different methods are used for edge detection. Each method has its own strengths and weaknesses and thus produces different edge detection results; together, they strengthen the model's edge recognition.

2.2.1.3.1 Fuzzy inference system for edge detection

Briefly, the fuzzy conditions test the relative values of pixels that would be present on an edge: an image is said to have an edge where the intensity variation between adjacent pixels is large. The mask used for scanning the image is shown in Figure 7 [17].

Figure 7.

Defining the FIS (fuzzy inference system) for edge detection of the first and second spatial images.

$G_x = \begin{bmatrix} -1 & 1 \end{bmatrix}, \qquad G_y = G_x^{T}$

The mask is slid over an area of the spatial image, changes that pixel's value, and then shifts one pixel to the right, continuing until it reaches the end of a row; it then starts at the beginning of the next row, and the process continues until the whole image has been scanned. As the mask slides over the image, the FIS generates the output based on the rules and the pixel values [17]. In summary, the steps for using the fuzzy inference system are as follows: a) crisp spatial images are fuzzified into various fuzzy sets having conventional crisp membership functions, i.e., black and white; b) the firing strength is calculated using fuzzy t-norm operators; c) the fuzzy rules are fired for each crisp spatial image; d) the aggregate resultant output fuzzy set over all fired rules is obtained using the max operator (s-norm); e) de-fuzzification is performed using the centroid method; f) the crisp output is the pixel value of the output image, i.e., one containing the edges and black-and-white regions; g) the first derivative is applied to the FIS output image after a noise removal algorithm; h) further refinement is performed by the second derivative and noise removal [17].
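To make steps (a) through (f) concrete, the following condensed sketch follows the standard MATLAB Fuzzy Logic Toolbox edge-detection workflow; the smoothing parameters and triangular membership points are illustrative values, not settings taken from this chapter:

  Gx = [-1 1]; Gy = Gx';                          % first-derivative masks from above
  Ix = conv2(I1, Gx, 'same');                     % horizontal gradient of the image
  Iy = conv2(I1, Gy, 'same');                     % vertical gradient
  edgeFIS = mamfis('Name','edgeDetection');       % Mamdani fuzzy inference system
  edgeFIS = addInput(edgeFIS,[-1 1],'Name','Ix');
  edgeFIS = addInput(edgeFIS,[-1 1],'Name','Iy');
  edgeFIS = addMF(edgeFIS,'Ix','gaussmf',[0.1 0],'Name','zero');   % "zero gradient"
  edgeFIS = addMF(edgeFIS,'Iy','gaussmf',[0.1 0],'Name','zero');
  edgeFIS = addOutput(edgeFIS,[0 1],'Name','Iout');
  edgeFIS = addMF(edgeFIS,'Iout','trimf',[0.1 1 1],'Name','white'); % non-edge region
  edgeFIS = addMF(edgeFIS,'Iout','trimf',[0 0 0.7],'Name','black'); % edge pixel
  edgeFIS = addRule(edgeFIS,[ ...
      "If Ix is zero and Iy is zero then Iout is white"; ...
      "If Ix is not zero or Iy is not zero then Iout is black"]);
  Iedge = zeros(size(I1));
  for ii = 1:size(I1,1)                           % evaluate the FIS row by row
      Iedge(ii,:) = evalfis(edgeFIS,[Ix(ii,:); Iy(ii,:)]');
  end

Defuzzification with the centroid method is the Mamdani default, so the loop output Iedge is already the crisp edge image of step (f).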

2.2.1.3.2 Sobel's operator for edge detection

The Sobel operator, sometimes called the Sobel–Feldman operator or Sobel filter, is used in image processing and computer vision, particularly within edge detection algorithms, where it creates an image emphasizing edges [18]. Technically, it is a discrete differentiation operator, computing an approximation of the gradient of the image intensity function. At each point in the image, the result of the Sobel–Feldman operator is either the corresponding gradient vector or the norm of this vector. The operator is based on convolving the image with a small, separable, integer-valued filter in the horizontal and vertical directions. The arrangement of pixels about the pixel [i, j] is shown in Table 1. Like the other gradient operators, $S_x$ and $S_y$ can be implemented using convolution masks:

$S_x = \begin{bmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{bmatrix}, \qquad S_y = \begin{bmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ 1 & 2 & 1 \end{bmatrix}$

Table 1.

Masks used by Sobel's operator [18].

The magnitude of the gradient is then computed as

$M = \sqrt{S_x^{2} + S_y^{2}},$

where the horizontal response is obtained from the neighborhood of pixel [i, j] as

$S_x = (a_2 + c\,a_3 + a_4) - (a_0 + c\,a_7 + a_6)$

with the constant c = 2.
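As a sketch of this convolution, assuming the prepared grayscale image I1 from the preparation stage, the masks can be applied directly with conv2:

  % Applying the Sobel masks by two-dimensional convolution:
  Sx = [-1 0 1; -2 0 2; -1 0 1];       % horizontal mask (c = 2 in the middle row)
  Sy = [-1 -2 -1; 0 0 0; 1 2 1];       % vertical mask
  Gx = conv2(I1, Sx, 'same');          % horizontal gradient component
  Gy = conv2(I1, Sy, 'same');          % vertical gradient component
  M  = sqrt(Gx.^2 + Gy.^2);            % gradient magnitude at each pixel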

2.2.1.3.3 Prewitt's operator for edge detection

Prewitt’s operator uses the same equations as Sobel’s operator, where constant c=1 (Table 2) [19].

SX=101101101
Sy=111000111

Table 2.

Masks used by Prewitt gradient operator.
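Both operators are also available through the Image Processing Toolbox function edge, which thresholds the gradient magnitude automatically; a one-line sketch for each, on the prepared image I1:

  BW_sobel   = edge(I1, 'sobel');      % binary edge map via the Sobel operator
  BW_prewitt = edge(I1, 'prewitt');    % binary edge map via the Prewitt operator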

2.2.1.4 Image comparison with different techniques and error reduction

2.2.1.4.1 The structural similarity index measure

The SSIM2 formula is based on three comparison measurements between the samples, namely the luminance term, the contrast term, and the structural term. The overall index is a multiplicative combination of the three terms [20].

$\mathrm{SSIM}(x,y) = [l(x,y)]^{\alpha}\,[c(x,y)]^{\beta}\,[s(x,y)]^{\gamma}$

$l(x,y) = \dfrac{2\mu_x \mu_y + C_1}{\mu_x^2 + \mu_y^2 + C_1}, \qquad c(x,y) = \dfrac{2\sigma_x \sigma_y + C_2}{\sigma_x^2 + \sigma_y^2 + C_2}, \qquad s(x,y) = \dfrac{\sigma_{xy} + C_3}{\sigma_x \sigma_y + C_3}$

where $\mu_x$, $\mu_y$, $\sigma_x$, $\sigma_y$, and $\sigma_{xy}$ are the local means, standard deviations, and cross-covariance for images x and y. If $\alpha = \beta = \gamma = 1$ (the default exponents) and $C_3 = C_2/2$ (the default selection of $C_3$), the index simplifies to:

$\mathrm{SSIM}(x,y) = \dfrac{(2\mu_x \mu_y + C_1)(2\sigma_{xy} + C_2)}{(\mu_x^2 + \mu_y^2 + C_1)(\sigma_x^2 + \sigma_y^2 + C_2)}$

The structural dissimilarity is then 1 − SSIM(x, y).

SSIM measures the perceptual difference between two similar images. It cannot judge which of the two is better: that must be inferred from knowing which is the original and which has been subjected to additional processing such as data compression.
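A sketch using the Image Processing Toolbox ssim function on the prepared grayscale images I1 and I2:

  % SSIM between the two prepared images; ssimmap is the local similarity
  % map, useful for locating where the changes occur:
  [ssimval, ssimmap] = ssim(I2, I1);   % I1 (time 1) treated as the reference
  dssim = 1 - ssimval;                 % structural dissimilarity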

2.2.1.4.2 Spatial image subtraction

Each grayscale image is a matrix of intensity values (0–255), so we can subtract the two matrices to compare the difference between the two images. Here we used the MATLAB Image Processing Toolbox command Z = imsubtract(img1, img2).

2.2.1.4.3 Absolute difference between the two spatial images

Another way to compare two spatial images is the sum of the absolute difference at each pixel. The difference value is defined as

$D(t) = \sum_{i=0}^{M} \left| I_{t-T}(i) - I_t(i) \right|$

where M is the resolution, or number of pixels, in the image. This image difference measure is noisy and extremely sensitive to camera motion and image degradation. When applied to sub-regions of the image, D(t) is less noisy and may be used as a more reliable measure of image difference:

$D_s(t) = \sum_{j=s}^{H/n} \sum_{i=s}^{W/n} \left| I_{t-T}(i,j) - I_t(i,j) \right|$

$D_s(t)$ is the sum of the absolute difference in a sub-region of the image, where s represents the starting position of a particular region and n represents the number of sub-regions [21]. The MATLAB Image Processing Toolbox also provides a function to compare two images this way:

Z = imabsdiff(imga, imgb)
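A direct implementation of the sub-region sum $D_s(t)$ might look like the following sketch, assuming an n-by-n grid of equally sized cells (n = 4 is an illustrative choice) over the prepared images I1 and I2:

  n = 4;                               % assumed number of sub-regions per dimension
  [H, W] = size(I1);
  D = zeros(n, n);                     % one absolute-difference sum per cell
  for r = 1:n
      for c = 1:n
          rows = round((r-1)*H/n)+1 : round(r*H/n);
          cols = round((c-1)*W/n)+1 : round(c*W/n);
          D(r,c) = sum(abs(I1(rows,cols) - I2(rows,cols)), 'all');
      end
  end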

2.2.1.4.4 Histogram comparison

In this part of the model, the histograms of the two spatial images are drawn and compared on a map that shows the color distribution between 0 and 255, from black to white, in the spatial images.
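A sketch of this comparison, overlaying the 256-bin grayscale histograms of the two captures (img1 and img2 from the preparation stage):

  [c1, x] = imhist(rgb2gray(img1));    % pixel counts over gray levels 0..255
  [c2, ~] = imhist(rgb2gray(img2));
  figure
  plot(x, c1, 'b', x, c2, 'r')         % blue: time 1, red: time 2
  legend('time 1','time 2'), xlabel('Gray level (0-255)'), ylabel('Pixel count')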

2.2.1.4.5 Error calculation and reduction

To check the error rate of the model, two images captured identically in space and time are taken from Google Earth and compared with the model. Ideally, the model should show no difference: since the two images are exactly the same, the difference calculated by the model must be absolutely zero. The error is calculated to an accuracy of 4 decimal places. A comparison of two exactly identical images is shown below (Figure 8):

Figure 8.

Calculation of the error rate with an accuracy of 4 decimal places; image descriptions as in Table 3.
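A sketch of this check, assuming two captures of the same place at the same time have been saved under the (illustrative) file names below; they should ideally differ by zero, to 4 decimal places:

  Ia = im2double(rgb2gray(imread('site_same_a.jpg')));
  Ib = im2double(rgb2gray(imread('site_same_b.jpg')));
  err = (1 - ssim(Ib, Ia)) * 100;      % ideally 0.0000
  fprintf('Model error: %.4f\n', err)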

2.2.1.5 Information integration

In this part, as in a single integrated system, information must be shared across all functional areas, and the collected information is integrated [22].

2.2.1.6 Spatial change recognition

We use the following method to calculate the difference between the two spatial images:

$\text{Percentage difference} = (1 - \mathrm{SSIM}) \times 100 - \text{Error}$

We also provide a complete map showing the spatial changes along with the calculated percentage of change.
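Putting the pieces together, a sketch of the final change measure on the prepared images, subtracting the model error err measured in the identical-image check above:

  ssimval = ssim(I2, I1);                  % I1 (time 1) as reference
  pctChange = (1 - ssimval) * 100 - err;   % percentage of spatial change
  fprintf('Spatial change: %.4f %%\n', pctChange)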


3. Results

In this section, after introducing the materials and methods, we run the proposed model to obtain the results. A description of the comparative general map shown in the results is given in Table 3.

Image number | Description
1 | Spatial image time 1, RGB to grayscale
2 | Action: fuzzy inference system on the spatial image time 1
3 | Action: Prewitt's operator on the spatial image time 1
4 | Action: Sobel's operator on the spatial image time 1
5 | Spatial image time 2, RGB to grayscale
6 | Action: fuzzy inference system on the spatial image time 2
7 | Action: Prewitt's operator on the spatial image time 2
8 | Action: Sobel's operator on the spatial image time 2
9 | Spatial image time 1, (.jpg) format
10 | Difference 1, between images 6 and 2; shows percentage difference: (1 − SSIM) × 100
11 | Difference 2, between images 7 and 3; operator imsubtract applied
12 | Difference 3, between images 8 and 4; operator imabsdiff applied

Table 3.

Description of the comparative general map in results.

Experiment 1:

See Figures 9–13.

Figure 9.

A: Greenland 1930, B: Greenland 2021; input spatial images to the model.

Figure 10.

Comparative general map, the description of the components is as shown in Table 3.

Figure 11.

Comparison of histograms of two temporal spatial images.

Figure 12.

a: Bar histogram of spatial image in time 1, b: Bar histogram of spatial image in time 2, c: Histogram comparison.

Figure 13.

Histogram comparison, above axis: The spatial image in time 2 and below: The spatial image in time 1.

Experiment 2:

See Figures 14–18.

Figure 14.

a: Jumeirah Palm beach 2000, b: Jumeirah Palm beach 2000.

Figure 15.

Comparative general map, the description of the components is as shown in Table 3.

Figure 16.

Comparison of histograms of two temporal spatial images.

Figure 17.

a: Bar histogram of spatial image in time 1, b: Bar histogram of spatial image in time 2, c: Histogram comparison.

Figure 18.

Histogram comparison, above axis: the spatial image in time 2 and below: the spatial image in time 1.

Experiment 3:

See Figures 19–23.

Figure 19.

a: Oroomiye Lake 1984, b: Oroomiye Lake 2017.

Figure 20.

Comparative general map, the description of the components is as shown in Table 3.

Figure 21.

Comparison of histograms of two temporal spatial images.

Figure 22.

a: Bar histogram of spatial image in time 1, b: Bar histogram of spatial image in time 2, c: Histogram comparison.

Figure 23.

Histogram comparison, above axis: the spatial image in time 2 and below: the spatial image in time 1.

Experiment 4:

See Figures 24–28.

Figure 24.

a: South Pole 1957, b: South Pole 2022.

Figure 25.

Comparative general map, the description of the components is as shown in Table 3.

Figure 26.

Comparison of histograms of two temporal spatial images.

Figure 27.

a: Bar histogram of spatial image in time 1, b: Bar histogram of spatial image in time 2, c: Histogram comparison.

Figure 28.

Histogram comparison, above axis: the spatial image in time 2 and below: the spatial image in time 1.


4. Discussion

Spatial analysis in GIS is performed by experts in the field using specialized software such as ArcGIS, and recognizing environmental changes is of special importance to them. In preparing the image data according to data mining rules, the only variable between the captured images must be time; the other properties of the location, including geographic coordinates and camera height above the ground, must remain constant, otherwise a calculation error will occur. By strictly observing this point, the system error is zero; in other words, we obtain an accuracy of 100%. The test results are also identical no matter how many repetitions are performed, which supports the validity of the model.

Comparison of segmentation methods and the proposed model, advantages and disadvantages:

Segmentation of spatial images is an important approach to detecting changes; segmentation must not allow regions of the image to overlap. Thresholding is one of the oldest methods used for image segmentation and is based on the gray-level intensity values of pixels taken from the image histogram. Atlas-based methods are conceptually similar to classifiers, except that they are implemented in the spatial domain of the image rather than in a feature space, and they treat segmentation as a registration process; some researchers have used atlases not only to impose spatial constraints but also to provide probabilistic information about the tissue model. Their advantage is that they can segment an image with no well-defined relation between regions and pixels. K-means is a clustering method that partitions n points into k clusters, each pixel belonging to one cluster, by minimizing an objective function so that the within-cluster sum of squares is minimized; it starts with k clusters and assigns each pixel to one of them. The limitation of the K-means algorithm is that its computation time grows when implemented on large amounts of data, whereas our proposed model is independent of clustering and will therefore be faster than K-means.
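For comparison with the discussion above, a minimal k-means segmentation baseline can be run in a few lines with the Image Processing Toolbox function imsegkmeans; the cluster count k = 3 is an assumed illustrative value, applied to the prepared image I1:

  k = 3;                               % assumed number of clusters
  L = imsegkmeans(im2single(I1), k);   % label matrix: one cluster index per pixel
  B = labeloverlay(I1, L);             % overlay the clusters on the image
  imshow(B)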


5. Implications

In the proposed model, using machine vision, image data processing, fuzzy mathematical techniques, known masks, and the historical imagery in Google Earth, we can detect changes between images and quantify them. This capability could be added to Google Earth as an extension, letting users easily view spatial changes over time.


6. Conclusion

The change recognition model previously presented by the author [17] has here been applied to spatial change recognition. The most important difference between the earlier model, for change recognition in industrial parts, and the current model lies in image capture and preparation. Here the images are taken from Google Earth and no filter is used to prepare them, because the images are of sufficient quality and adding any filter would introduce a computational error into the model (this was tested many times by the author); in the base model, images were captured with a local camera, which introduced error, and a macrographic imaging technique was used [17].

References

  1. Khurana M, Saxena V. Soft computing techniques for change detection in remotely sensed images: A review. International Journal of Computer Science Issues. 2015;12(2)
  2. Bruzzone L, Bovolo F. A novel framework for the design of change-detection systems for very-high-resolution remote sensing images. Proceedings of the IEEE. 2013;101(3)
  3. Inzamul Haque M, Basak R. Land cover change detection using GIS and remote sensing techniques: A spatio-temporal study on Tanguar Haor, Sunamganj, Bangladesh. The Egyptian Journal of Remote Sensing and Space Sciences. 2017;20(2):251-263. DOI: 10.1016/j.ejrs.2016.12.003
  4. Gomaa M, Hamza E, Elhifnawy H. Automated unsupervised change detection technique from RGB color image. Materials Science and Engineering. 2019;610:012046. DOI: 10.1088/1757-899X/610/1/012046
  5. Fisher R. Change detection in color images. In: Proceedings of 7th IEEE Conference on Computer Vision and Pattern Recognition. Citeseer; 1999
  6. Kadhum ZM, Jasim BS, Obaid MK. Change detection in city of Hilla during period of 2007-2015 using remote sensing techniques. Materials Science and Engineering. 2020;737:012228. DOI: 10.1088/1757-899X/737/1/012228
  7. Wales N, Murphy RJ, Bruce E. Understanding patterns of vegetation change at the Angkor world heritage site by combining remote sensing results with local knowledge. International Journal of Remote Sensing. 2021;42(2). DOI: 10.1080/01431161.2020.1809739
  8. Bouziani M, Goïta K, He D-C. Automatic change detection of buildings in urban environment from very high spatial resolution images using existing geodatabase and prior knowledge. ISPRS Journal of Photogrammetry and Remote Sensing. 2010;65:143-153. DOI: 10.1016/j.isprsjprs.2009.10.002
  9. You Y, Cao J, Zhou W. A survey of change detection methods based on remote sensing images for multi-source and multi-objective scenarios. Remote Sensing. 2020;12(15):2460. DOI: 10.3390/rs12152460
  10. Li Q, Wang S, Bai X, Luo G, Song X, Tian Y, et al. Change detection of soil formation rate in space and time based on multi-source data and geospatial analysis techniques. Remote Sensing. 2020;12:121. DOI: 10.3390/rs12010121
  11. Panuju DR, Paull DJ, Gri AL. Change detection techniques based on multispectral images for investigating land cover dynamics. Remote Sensing. 2020;12:1781. DOI: 10.3390/rs12111781
  12. Asokan A, Anitha J. Change detection techniques for remote sensing applications: A survey. Earth Science Informatics. 2019;12:143-160. DOI: 10.1007/s12145-019-00380-5
  13. Kirscht M, Rinke C. 3D reconstruction of buildings and vegetation from synthetic aperture radar (SAR) images. MVA. 1998
  14. Lia Z, Jia Z, Liu L, Yang J, Kasabovc N. A method to improve the accuracy of SAR image change detection by using an image enhancement method. ISPRS Journal of Photogrammetry and Remote Sensing. 2020;163:137-151. ISSN: 0924-2716. DOI: 10.1016/j.isprsjprs.2020.03.002
  15. Sun Y, Lei L, Li X, Sun H, Kuang G. Nonlocal patch similarity based heterogeneous remote sensing change detection. Pattern Recognition. DOI: 10.1016/j.patcog.2020.107598
  16. Ansari RA, Buddhiraju KM, Malhotra R. Urban change detection analysis utilizing multiresolution texture features from polarimetric SAR images. Remote Sensing Applications: Society and Environment. DOI: 10.1016/j.rsase.2020.100418
  17. Mirbod M, Ghatari AR, Saati S, Shoar M. Industrial parts change recognition model using machine vision, image processing in the framework of industrial information integration. Journal of Industrial Information Integration. 2022;26:100277. DOI: 10.1016/j.jii.2021.100277. ISSN: 2452-414X
  18. Kanopoulos N, et al. Design of an image edge detection filter using the Sobel operator. IEEE Journal of Solid-State Circuits. 1988;23(2):358-367
  19. Seif A, et al. A hardware architecture of Prewitt edge detection. In: Sustainable Utilization and Development in Engineering and Technology (STUDENT), 2010 IEEE Conference. Malaysia; 2010. pp. 99-101
  20. Zhou W, Bovik AC, Sheikh HR, Simoncelli EP. Image quality assessment: From error visibility to structural similarity. IEEE Transactions on Image Processing. 2004;13(4):600-612
  21. Xiong Z, Huang TS. The Essential Guide to Video Processing. Austin, Texas, USA: Department of Electrical and Computer Engineering, The University of Texas at Austin; 2009
  22. Xu L. Enterprise Integration and Information Architecture: A Systems Perspective on Industrial Information Integration. Auerbach Publications; 2014. p. 446. ISBN: 9781439850244

Notes

  1. Synthetic-aperture radar (SAR): a form of radar used to create two-dimensional images or three-dimensional reconstructions of objects, such as landscapes [13].
  2. SSIM: Structural Similarity Index Measure.
