Abstract
A fundamental objective of remote sensing imagery is to spread knowledge about our environment and to facilitate the interpretation of the different phenomena affecting the Earth’s surface. The main goal of this chapter is to understand and interpret possible changes in order to subsequently define strategies and adequate decision-making for better soil management and protection. Consequently, the semantic interpretation of remote sensing data, which consists of extracting useful information from image data in order to attach semantics to the observed phenomenon, allows an easy understanding and interpretation of such changes. However, the change interpretation task is not only based on the perceptual information derived from the data but also on additional knowledge sources such as prior and contextual knowledge. This knowledge needs to be encoded in an appropriate way so that it can be used as a guide in the interpretation process. On the other hand, interpretation may take place at several levels of complexity, from the simple recognition of objects in the analyzed scene to the inference of site conditions and to change interpretation. For each level, information elements such as data, information and knowledge need to be represented and characterized. This chapter highlights the importance of exploiting ontologies for encoding the domain knowledge and for using it as a guide in the semantic scene interpretation task.
Keywords
- data
- information
- knowledge
- remote sensing imagery
- contextual information
- semantic image interpretation
- change interpretation
- ontologies
1. Introduction
A fundamental objective of remote sensing is to spread knowledge about our environment and to facilitate the interpretation of the different phenomena affecting the Earth’s surface. Indeed, satellite images make it possible to observe many more phenomena, thanks to the increase of information acquired from multiple sensors. These phenomena include, for instance, climatic change, urbanization, deforestation, desertification, and so on. The remote sensing and GIS communities have a great interest in change analysis and interpretation. Therefore, tools and strategies have been developed for studying and analyzing the Earth’s surface dynamics. The principal objective, here, is to understand and interpret the changes that may occur, thus allowing to define strategies and adapted decision-making for better soil management and protection. Change detection, in remote sensing, can be defined as the process of identifying differences in the state of an object or a phenomenon by observing it at different times [1]. Applications associated with change detection include monitoring the evolution of cultures and land use, the spatial progression of vegetation, forest and urban monitoring, the analysis of climate change impacts and other cumulative changes. Several change detection approaches have been proposed in remote sensing. The general objectives of most change detection approaches include identifying the geographic locations and types of changes, quantifying the changes and assessing the accuracy of change detection results [2].
The information levels about changes from remote sensing imagery can be categorized as follows: (1) the change detection level, which allows detecting simple binary change (i.e. change vs. non-change). This category includes techniques such as image differencing [3], image rationing [4] and change vector analysis (CVA) [5]. These techniques focus on localizing changes but do not provide any information about the nature of the change. (2) The second category (also called the thematic level of change) allows the identification of the detailed “from-to” change. It includes techniques such as post-classification comparison [6] and classified objects change detection (COCD) [7]. For more details about change detection techniques from remotely sensed images, the reader can refer to the work of [2]. The authors have given an overview of different change detection approaches, where a comparison between pixel-based change detection and object-based change detection has been presented. Pixel-based change detection methods exploit the spectral characteristics of an image pixel to detect and measure changes. Although these methods have been successfully implemented in many areas for change detection using remote sensing data, an important limitation of these approaches is that they do not exploit the spatial context of real objects [2]. To overcome this limitation, object-based approaches have been developed. Object-based change detection approaches, as defined by [8], allow to identify differences in geographic objects at different moments by using object-based analysis. This latter allows to obtain, from an image object, information such as shape, texture and spatial relationships, allowing the exploitation of the spatial context [2]. Consequently, the inclusion of this contextual information allows to understand the semantics of objects [9].
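As an illustration, the pixel-based techniques cited above can be sketched in a few lines. The following is a minimal sketch of change vector analysis (CVA) producing a binary change/non-change map; the toy image values and the threshold are assumptions for demonstration only, not values from the cited works.

```python
import math

def cva_magnitude(pixel_t1, pixel_t2):
    """Magnitude of the spectral change vector between two
    multi-band pixel values (change vector analysis)."""
    return math.sqrt(sum((b2 - b1) ** 2 for b1, b2 in zip(pixel_t1, pixel_t2)))

def binary_change_map(image_t1, image_t2, threshold):
    """Pixel-based binary change map: True where the change vector
    magnitude exceeds the threshold (change vs. non-change)."""
    return [
        [cva_magnitude(p1, p2) > threshold for p1, p2 in zip(row1, row2)]
        for row1, row2 in zip(image_t1, image_t2)
    ]

# Toy example: a 1x2 scene with two spectral bands per pixel.
t1 = [[(10, 10), (10, 10)]]
t2 = [[(10, 10), (40, 40)]]
change = binary_change_map(t1, t2, threshold=20.0)
print(change)  # [[False, True]]
```

Note that, as discussed above, such a map localizes changes but says nothing about their nature: that is precisely the limitation the thematic and interpretation levels address.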
Up to now, both change detection approaches (i.e. pixel-based and object-based methods) have been successful either for detecting simple binary change/non-change (i.e. answering “are there changes?”) or for detailing the “from-to” change between different classes (i.e. what change?). However, at these two levels of change detection (change detection and identification), these approaches do not give any information about the cause of changes (i.e. why and how change?) and, therefore, give no hints on how to evaluate their significance for the decision-making task. Consequently, a change interpretation level is needed for generating a description of the character and causality of change. The change interpretation level, here, allows to extract information from data (images) about the changes that may occur, that is, to answer the question “why and how has a change been produced?”. As at any interpretation level, the change interpretation task is not only based on the perceptual information derived from data, but it also requires other knowledge sources such as prior and contextual knowledge.
Highlighting the role of remote sensing imagery for change detection and interpretation, an appropriate semantic interpretation method is needed for change interpretation in satellite images. Such methods should take into account the description and the representation of the different information elements at each interpretation level. This chapter focuses on semantic scene interpretation for change interpretation in satellite images. The semantic scene interpretation task is composed of different levels of abstraction. The objective of this chapter is to describe the semantic scene interpretation strategy, including the definition and the representation of the different information elements composing that process. It is structured in four sections. Section 2 defines the fundamental elements required for the interpretation task and presents the role of ontologies in semantic image interpretation. Afterwards, a description of semantic interpretation methods is discussed. Section 3 reviews and classifies approaches for semantic remote sensing image interpretation. Section 4 presents a proposed method for semantic change interpretation and describes its different components.
2. Semantic interpretation
The utility of remote sensing comes not from the data itself but rather from the information that can be derived from this data [10]. For this reason, the interpretation and transformation of data into usable information is an important step for the development of user applications. Interpretation plays an important role in the process of data analysis. It helps users to easily understand the information extracted from remote sensing data. Consequently, the interpretation of this data enables users to make policy and management decisions. To be understandable, data must be transformed into information, and then into knowledge, as shown in Figure 1. In this section, we give a meaning to each information element, such as data, information and knowledge, and then we present interpretation, in a general sense, and the existing interpretation methods.
2.1. Definitions and fundamental elements
There are many definitions and significations of informative elements such as data, information and knowledge. According to [11],
Information comes from different sources, namely data,
To summarize, an information element (data, information or knowledge) is “an entity composed of a definition set and a content set linked by a functional relationship called informative relation, associated with internal and external context”.
Figure 2 shows the general structure of an information element. Lillesand et al. [17] suggest that: “
2.2. Semantic image interpretation
In remote sensing imagery, image interpretation consists of assigning geographic object types to image objects [18]. A geographic object, according to [19], is an object of a certain minimum size on or near the Earth’s surface (e.g. a forest, lake or mountain), whereas an image object is a discrete region of a digital image that is internally coherent and different from its surroundings [20, 18].
For a long time, image interpretation has been based on pixel classification methods [17]. More recently, Castilla and Hay [20] have developed a new approach enabling image analysis and interpretation based on the partitioning of the image into objects. This approach, called GEOBIA, relies on linking geographic objects to image objects through three different steps: segmentation, extraction and classification [21, 22, 18]. The segmentation step delineates regions having common characteristics. According to [20, 18], this step is based on the hypothesis that partitioning an image into objects is related to the way humans conceptually organize the landscape to comprehend it. The extraction step defines the characteristics of the objects, such as shape, texture or the spectral response (i.e. low-level features such as high values in certain spectral bands) [23]. The classification step assigns a category (i.e. a semantic meaning) to the segmented objects according to the attributes calculated in the extraction phase. This last step aims at enriching the objects of the image in order to assign them a significant semantics (i.e. high-level concepts such as vegetation). This process is performed through the analysis of segment attributes and the interrelationships among segments to identify their geographic labels [23]. Such a concept highlights the importance of contextual information in improving the classification [24]. These techniques have shown efficient results based on expert knowledge. However, expert knowledge is subjective and cannot be used directly by an automatic process [22]. Consequently, this knowledge limits the automation of the image interpretation procedure. According to [18], the issue of automatic image interpretation consists of developing target recognition algorithms to map geographic objects. At this level, the challenge consists of linking the symbolic semantic information (e.g. the vegetation index) with numerical low-level features (e.g. the measured vegetation index value). However, matching the high-level knowledge with the low-level knowledge leads to the so-called semantic gap problem. This problem is defined as the lack of coincidence between the information that can be extracted from the visual data and the interpretation that the same data have for a user in a specific situation [25].
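As a concrete illustration of linking a numerical low-level feature to a high-level concept, the following hedged sketch maps a segment’s measured vegetation index (NDVI) to the concept vegetation; the threshold and reflectance values are illustrative assumptions, not values taken from the cited works.

```python
def ndvi(red, nir):
    """Normalized Difference Vegetation Index computed from red and
    near-infrared reflectances: a numerical low-level feature."""
    return (nir - red) / (nir + red)

def label_segment(mean_red, mean_nir, ndvi_threshold=0.4):
    """Attach a high-level concept to an image segment based on its
    low-level spectral feature. The threshold is an illustrative
    assumption, not a universal value."""
    if ndvi(mean_red, mean_nir) > ndvi_threshold:
        return "vegetation"
    return "non-vegetation"

# Healthy vegetation reflects strongly in the near-infrared band.
print(label_segment(mean_red=0.05, mean_nir=0.45))  # vegetation
print(label_segment(mean_red=0.30, mean_nir=0.35))  # non-vegetation
```

The semantic gap appears precisely here: the rule bridging the measured value and the concept is domain knowledge that must come from somewhere outside the image itself.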
Hudelot et al. [26] defined semantic image interpretation as the process of extraction and inference of high-level knowledge from the observed image. According to [26], “
With recent advances in knowledge engineering, ontologies are increasingly used for the formalization of the knowledge of a given domain in a coherent and consensual manner [27]. Indeed, ontologies are acknowledged as powerful conceptual tools for describing the knowledge of a domain in a structured and shared way and for the management of unstructured data, in particular in the domain of the semantic web. They provide a relevant methodological framework for the representation of domain knowledge.
An ontology specifies a set of concepts, their characteristics, their instances and their relationships, as well as axioms that are relevant for modeling a domain of study, and it permits the inference of implicit knowledge. It separates the expert’s domain knowledge, expressed by high-level concepts, from the low-level features of image objects [18]. Generally, the association of these two levels can be performed using inference engines (i.e. reasoners) over the ontology. A reasoner is considered by remote sensing experts as a classification algorithm: based on logical rules (expressed in description logics), an automatic reasoner can infer new knowledge from the explicit knowledge encoded in ontologies and verify its logical consistency.
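The inference step performed by a reasoner can be illustrated with a toy subsumption example. The sketch below is a minimal pure-Python stand-in for description-logic reasoning (a real system would rely on a dedicated DL reasoner); the class names and axioms are illustrative assumptions.

```python
def infer_superclasses(concept, subclass_of):
    """Transitive closure over explicit subclass axioms: a tiny
    stand-in for the subsumption inference a DL reasoner performs."""
    inferred, frontier = set(), [concept]
    while frontier:
        current = frontier.pop()
        for parent in subclass_of.get(current, []):
            if parent not in inferred:
                inferred.add(parent)
                frontier.append(parent)
    return inferred

# Explicit axioms: Forest is a Vegetation; Vegetation is a LandCover.
axioms = {"Forest": ["Vegetation"], "Vegetation": ["LandCover"]}

# The reasoner derives the implicit axiom "Forest is a LandCover".
print(infer_superclasses("Forest", axioms))  # contains Vegetation and LandCover
```

A consistency check (e.g. detecting that an individual is asserted to belong to two disjoint classes) would follow the same pattern of deriving consequences from explicit axioms.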
3. Semantic interpretation of remote sensing images
In a wide sense, ontologies, presented as explicit knowledge models, are widely used in the image interpretation domain. First, particular contributions highlighting the use of ontologies in the image processing domain have been presented in the multimedia field [26, 32–35]. Indeed, several multimedia ontologies have been presented and developed either for the description of the low-level image content or as standard annotation vocabularies for describing the high-level image content. The approaches proposed in [32–34] are examples of ontologies proposed in this context. Other approaches, focusing on the image processing or annotation problem, have exploited ontologies as an annotation vocabulary to facilitate the mapping between perceptual primitives and high-level concepts. These approaches use a visual or object ontology at the intermediate level (i.e. in between low-level features and domain concepts [26]). According to Maillot [35], the visual concept ontology guides the domain knowledge acquisition process by providing a set of generic visual terms close to natural language and closer to image features. It respectively allows the reduction of the domain knowledge acquisition bottleneck and of the semantic gap between domain concepts and low-level features. In Mezaris [36], an object ontology, which is a set of qualitative intermediate-level descriptors, has been proposed. It is used to allow the qualitative definition of the high-level concepts (that users query for) and their relations. Similarly, Hudelot et al. [26] have presented a solution to the symbol grounding problem. The symbol grounding problem refers to the mapping between low-level features (i.e. the numerical image data) and the high-level semantic concepts. The proposed work presents a learning approach for linking low-level features and visual concepts by using an intermediate processing ontology and
Ontologies have been widely exploited in the remote sensing domain, particularly, for the interpretation or the annotation of remote sensing imagery. Hence, several approaches have been proposed for geographical information analysis and management. The proposed approaches are distinguished according to their objective.
3.1. Ontology-based objects classification
As we have introduced, object-based classification consists of assigning a semantic class label (i.e. a high-level concept) to a region (i.e. an image object) in the image. In this context, Raskin and Pan [38] used an ontology as a knowledge base allowing to classify orthogonal classes, such as space, time, Earth realms and physical quantities, and integrative science knowledge classes, such as phenomena, events, and so on. In Ref. [39], the authors have presented a semantic model for the classification of landforms, which are extracted from a digital elevation model using OBIA methods. Andrès et al. [22] have shown that making expert knowledge explicit via ontologies can improve the automation of satellite image analysis, and they have presented an ontological approach allowing to classify remote sensing images. Jess et al. [40] proposed an ontological framework for ocean satellite image classification, which depicts a potential way of building an ontology model for low- and high-level features. Recently, Belgiu et al. [41] have presented a method that consists of coupling an ontology-based knowledge representation for object classification with the OBIA framework. A very recent semantic object-based classification method (using an ontology of high-resolution remote sensing imagery) has been presented in [42]. In this approach, the authors started with ontology modeling; then, a classification step is performed based on data-driven machine learning, segmentation, feature selection, sample collection and an initial classification. Finally, the image objects are re-classified based on the ontological model.
3.2. Ontology-based objects recognition
Several studies have focused on the object recognition problem in satellite imagery. For instance, Durand et al. [43] have presented an ontology for the recognition of urban objects in satellite images. This ontology has later been enriched in [44] with other domain concepts and spatial relations, and has then been used for the annotation and interpretation of remote sensing images. In [45], Forestier et al. have developed an ontology for the identification of urban features in satellite images. The proposed method starts by associating a set of low-level characteristics to each image region by using a segmentation algorithm. Then, the knowledge base (i.e. the ontology) is used to assign a semantics to the considered region. This work has been extended and generalized in [46] by adding new knowledge functions (KFs), including spatial relations between objects. The extended approach has then been applied to coastal object recognition. Recently, Luo et al. [47] have presented an ontology-based framework used to model land cover extraction knowledge and to interpret high resolution satellite (HRS) images at the regional level. In this work, the land cover ontology structure is explicitly defined, representing the spectral, textural and shape features and allowing for the automatic interpretation of the extracted results. Similarly, Gui et al. [48] have presented an ontological method for extracting individual buildings with different orientations and different structures from SAR images, based on ontological semantic analysis.
3.3. Ontology-based change detection
Modeling the different states of objects, or phenomena, in time allows to detect and identify the different changes that these objects and phenomena can undergo. However, few ontology-based approaches have been proposed for change detection in the remote sensing domain. For instance, Hashimoto et al. [49] have presented a framework based on ontologies and heuristics for automatic change interpretation. The proposed framework considers remote sensing data analysis as knowledge information processing, which derives new information about targets by inference from the observed data and
4. Proposed approach: multi-level semantic image interpretation
4.1. Semantic scene interpretation strategy
Semantic image interpretation is defined by the processes of semantics extraction and inference of high-level knowledge from an observed image. Semantics extraction refers to image interpretation from a human perspective. It consists of obtaining useful spatial and semantic information on the “basic informative granules” (i.e. pixels, objects, zones, the global scene) using human knowledge and experience. Generally, existing approaches for semantic image interpretation follow a multi-level strategy for describing the image content. According to Marr’s vision [52], this architecture allows to separate the perceptual levels (i.e. the syntactic description of the visual content of the image in terms of descriptors and visual primitives) from the conceptual or semantic levels (i.e. the meaning of the elements present in the image). Hudelot et al. [26] have adapted this architecture for semantic image interpretation in the medical domain. These authors suggested that the semantic level can be divided into three semantic abstraction sub-levels: the semantic object level, the semantic spatial level and the semantic global level. Consequently, we have used this multi-level architecture for semantic scene interpretation in the remote sensing domain and subsequently for change interpretation.
As shown in Figure 3, the proposed architecture is composed of different levels of abstraction. Consequently, following the idea that “interpretation may take place at several levels of complexity, from the simple recognition of objects in the scene to the inference of site conditions”, an interpretation task is assigned to each level. For each level, the interpretation strategy depends on: the input data (i.e. the definition set) (e.g. the scene), the output goal (i.e. the content set) (e.g. semantic object classification) and
- A definition set giving the potential input information element (i.e. what the information refers to);
- A content set encoding the possible knowledge produced by the information (e.g. measurements or estimations of physical parameters, decisions, hypotheses);
- An input-output relationship representing the functional link model (e.g. mathematical, physical) that associates the input elements with the produced information contents;
- An internal context gathering intrinsic characteristics, constraints or controls about the informative relation itself;
- An external context containing data, information or knowledge useful to the elaboration of the meaning or the interpretation of the information element.
Formally, an information element can thus be seen as a definition set and a content set linked by an informative relation and associated with an internal and an external context.
In the basic information element structure, illustrated in Figure 2, objects and contents represent entities linked through the informative relation. According to Bosse et al. [14], the nature of these entities may be either hard or soft. Hard means that these entities are quantitatively defined with numbers, individuals, and so on; an example of a hard object is rows of pixels, and an example of hard contents is features. Soft signifies that the entities are qualitatively defined with words, opinions, predictions, and so on. For instance, a rule defining an image vegetation segment is a soft object, and the vegetation segment represents the soft content. However, soft entities require a context in which the qualitative descriptors are defined [14]. An informative relation may be impersonal (or hard) when it does not depend on external conditions to link objects to contents. However, it may also behave in a softer way by using cognitive factors such as subjective judgements, opinions and perceptions (e.g. representing human experts’ outputs). For example, if we consider a sensor making an acquisition as the informative relation, setting the parameter values of this sensor belongs to the internal context. However, the conditions explaining why the sensor has been set up with these settings belong to the external context. These conditions include the context of observation. This latter represents an important part of the perception process, as it is all that has an influence on the perception of an event and all that is needed to understand the observation. Therefore, different situations may be perceived, relying on the same set of sensory information items, if they are interpreted within different contexts of observation. Thus, providing the external context for a specific domain and a specific aim enables a system to intelligently interpret a specific situation. Part of this context is the domain knowledge that every human uses to interpret and understand any perception.
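The information element structure described above can be sketched as a small data structure. This is only an illustrative sketch: the field names, the toy sensor-gain relation and the context contents are assumptions chosen to mirror the sensor acquisition example, not a normative implementation.

```python
from dataclasses import dataclass, field
from typing import Any, Callable, Dict

@dataclass
class InformationElement:
    """Sketch of an information element: a definition set and a content
    set linked by an informative relation, with internal and external
    contexts (illustrative field names)."""
    definition_set: Any                       # what the information refers to
    informative_relation: Callable[..., Any]  # functional link, input -> content
    internal_context: Dict[str, Any] = field(default_factory=dict)
    external_context: Dict[str, Any] = field(default_factory=dict)

    def content_set(self):
        """Produce the content set by applying the informative relation
        to the definition set under the internal context."""
        return self.informative_relation(self.definition_set, **self.internal_context)

# A sensor acquisition as informative relation: its parameter settings
# belong to the internal context, while the reason those settings were
# chosen (the observation conditions) belongs to the external context.
elem = InformationElement(
    definition_set=[10, 20, 30],
    informative_relation=lambda pixels, gain=1.0: [p * gain for p in pixels],
    internal_context={"gain": 2.0},
    external_context={"acquisition_reason": "cloud-free morning pass"},
)
print(elem.content_set())  # [20.0, 40.0, 60.0]
```

Note that the external context plays no role in producing the content set here; in line with the discussion above, it serves the interpretation of that content, not its computation.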
The exploitation of ontologies offers a powerful way of representing and reasoning about this knowledge. In the following sections, we describe the information element of each semantic image interpretation level and we demonstrate the role of ontologies in representing and reasoning about both the internal and external contexts.
4.2. Information element: pixel level
The semantic image interpretation strategy starts with a feature extraction step, where raw image data are “converted” into visual features (edges, segments, regions, intersections, etc.), which are supposed to correspond to meaningful parts of semantic objects. What is considered here is not information but a kind of abstract data (i.e. a set of pixels). This level of abstraction corresponds to the information element paradigm when associated with the basic objects that are observed. An informative relation, that is, feature extraction, is used to link objects (the input set) to contents (the output set). This informative relation embeds the knowledge allowing to build this link. For instance, the informative relation extracting features from the image (i.e. the definition set) probably needs to know the resolution of the sensor producing the image pixels [14], as well as other knowledge such as the segmentation and extraction algorithms and the feature properties. This additional information belongs to the internal context of the information element. In addition, contextual information such as the sensing conditions, including the acquisition date, the sun elevation angle and the atmospheric conditions, and the characterization of the algorithms has to be known prior to the feature extraction step. This information belongs to the external context of the information element. Figure 4 shows the information element structure of the pixel level (data).
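The pixel-level informative relation (feature extraction) can be sketched with a toy edge extractor; the gradient rule and the threshold are illustrative assumptions standing in for the real segmentation and extraction algorithms of the internal context.

```python
def horizontal_edges(image, threshold):
    """Toy feature extraction: mark positions where the horizontal
    intensity gradient exceeds a threshold (an edge feature).
    The informative relation links raw pixels (definition set) to
    extracted features (content set); the threshold, like the sensor
    resolution, would belong to the internal context."""
    return [
        [abs(row[x + 1] - row[x]) > threshold for x in range(len(row) - 1)]
        for row in image
    ]

# Raw pixel intensities with a sharp vertical boundary in the middle.
raw = [[10, 10, 200, 200],
       [10, 10, 200, 200]]
print(horizontal_edges(raw, threshold=50))
# [[False, True, False], [False, True, False]]
```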
4.3. Information element: visual primitives level
The visual primitives level aims to assign a semantic attribution (i.e. a label) to the image segments extracted from the shapefile containing the information about image features. An informative relation links the input set (i.e. the shapefile) to the output set (i.e. the labeled segments) and uses internal information, including feature properties and segment rules, for reasoning about the segment labels. The attribution of these labels is based on an external resource (i.e. the external context) formulated as a visual primitive ontology. This ontology includes a general description of expert knowledge about the representation of geographical features. The concepts associated with this structure are derived from image concepts. Figure 5 illustrates the structure of the information element at the visual primitives level. At this level, it is worthwhile to notice that the information definition set (i.e. the input definition set) is represented by a shapefile format and the information content set (i.e. the output set) by an RDF file format.
4.4. Information element: object level
The semantic object level allows to attribute a hard classification to the objects in the scene. Indeed, the semantic object interpretation consists of attributing hard classes, such as forest, lake, urban and others, to the labeled visual primitives extracted in the former step. These latter, formulated as knowledge, have been extracted at the visual primitive level, and they are used as the input definition set of the information element at the semantic object level. What is considered at this level is not information or data, but a set of knowledge allowing to describe the semantics of objects in the image. The link between the symbolic description (i.e. the input definition set) and the semantic content (i.e. the output set) is performed through the classification reasoner representing the informative relation of the information element. This informative relation needs other knowledge in order to associate the semantic definition with the different image contents. Such knowledge includes
4.5. Information element: semantic spatial level
At the semantic spatial interpretation level, the focus is on giving a visual description of the whole content of the image scene at a given time. In other words, the objective is to describe the different objects present in the scene and the existing spatial relations that hold between them. Consequently, this allows to give a conceptual representation of the semantic objects and their spatial relations, allowing a semantic interpretation at the scene level. A spatial relation extraction reasoner (representing the informative relation of the information element, which defines the knowledge here) allows to make a link between the semantic object hierarchy (i.e. the input definition set) and the conceptual representation (i.e. the output content set). To build this link, the informative relation needs to use some predefined constraints about spatial relationships. These constraints (or spatial rules) are part of the internal context, allowing to define the spatial relations existing between the different objects in the scene. Spatial relationships include neighborhood relations (such as externally connected (EC), disconnected (DC) and non-tangential proper part (NTPP)); directional relations describing the relative orientations of objects (e.g. North and South); and distance relations (such as the near and far relationships [53]).
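A minimal sketch of such spatial rules, assuming simplified axis-aligned rectangular regions (a real system would reason over arbitrary region geometries and the full RCC8 relation set), might look as follows:

```python
def region_relation(a, b):
    """Qualitative relation between two axis-aligned rectangles given
    as (xmin, ymin, xmax, ymax): a toy stand-in for RCC8-style
    neighborhood relations used as spatial rules."""
    ax0, ay0, ax1, ay1 = a
    bx0, by0, bx1, by1 = b
    if ax1 < bx0 or bx1 < ax0 or ay1 < by0 or by1 < ay0:
        return "DC"    # disconnected: no shared point
    if ax1 == bx0 or bx1 == ax0 or ay1 == by0 or by1 == ay0:
        return "EC"    # externally connected: boundaries touch
    if ax0 > bx0 and ay0 > by0 and ax1 < bx1 and ay1 < by1:
        return "NTPP"  # non-tangential proper part: a strictly inside b
    return "O"         # otherwise overlapping (simplified catch-all)

# Hypothetical scene objects as bounding boxes.
lake   = (2, 2, 4, 4)
forest = (0, 0, 10, 10)
road   = (11, 0, 15, 10)
print(region_relation(lake, forest))  # NTPP: the lake lies inside the forest
print(region_relation(forest, road))  # DC: forest and road are disjoint
```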
All these spatial relations are formulated in a spatial relation ontology, as presented in [26], and then they are integrated as part of the external context in the structure of the information element. On the other hand, the informative relation requires the integration of the domain knowledge (i.e. the domain ontology) with the spatial relation ontology for the global interpretation and understanding of the scene. Notice here that this domain ontology is used as the external context of the information element at the semantic object level. Therefore, the integration of the spatial relation ontology with the domain ontology (as the external context) in the information element of the semantic spatial level illustrates the growing extent of the external context as the level of abstraction increases (Figure 7).
4.6. Information element: global semantic level
The global semantic level (or high semantic level) refers to the semantic interpretation of scenes over time. It allows to describe the semantic content, in terms of objects and their relations, of different images representing the same scene over time. To reach this purpose, the global semantic scene interpretation consists of integrating the temporal relations that can hold between the objects of the images. The obtained result, the output content, is an ontological conceptualization representing the different concepts in the images as well as their relationships, namely semantic, spatial, temporal and filiation relations. Figure 8 shows the structure of the information element at this level. In this structure, the informative relation considers as an input set the semantic spatial representations of the different scenes (e.g. two scenes here). In order to link these representations with the output result (i.e. the content set), the informative relation, that is, the temporal and filiation relations reasoner, uses temporal and filiation rules as the internal context and integrates temporal and filiation relation ontologies as the external context. Temporal relations define the possible relations that hold between two time intervals and can be used for reasoning about the temporal descriptions of events, actions, beliefs, intentions or causality. Generally, Allen’s interval algebra [54] is the best known and most widely used model for topological temporal relations between objects in time. The importation of the SWRLTO ontology, as part of the external context, offers the possibility to classify the different time relations. This allows, for example, to define the rule for the temporal relation before as follows:
In addition, filiation relations are also of great importance for reasoning about the relations between objects over time. Filiation relations have been introduced by Del Mondo [55] and include continuation and derivation relationships. Continuation occurs when an entity (a real object) continues to exist from one time to the next with the same identity, whereas derivation occurs when an entity creates some others with new identities [55]. Thus, these relations must be integrated into the context, thereby allowing the informative relation to link the input set to the output set.
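The temporal (Allen) and filiation relations discussed above can be sketched as simple predicates. This is an illustrative sketch only: the interval encoding as (start, end) pairs and the identity-based continuation check are assumptions, not the ontological rules of the cited works.

```python
def before(interval_a, interval_b):
    """Allen's 'before': interval a ends strictly before b starts."""
    return interval_a[1] < interval_b[0]

def meets(interval_a, interval_b):
    """Allen's 'meets': interval a ends exactly when b starts."""
    return interval_a[1] == interval_b[0]

def continuation(entity_t1, entity_t2):
    """Filiation 'continuation' (after Del Mondo): the entity keeps the
    same identity from one time to the next (toy identity check)."""
    return entity_t1["id"] == entity_t2["id"]

# Two acquisition periods of the same scene, as (start, end) years.
acquisition_2000 = (2000, 2005)
acquisition_2010 = (2010, 2015)
print(before(acquisition_2000, acquisition_2010))  # True

# The same forest object observed at two times: shrinkage, same identity.
forest_t1 = {"id": "forest_01", "area": 120.0}
forest_t2 = {"id": "forest_01", "area": 95.0}
print(continuation(forest_t1, forest_t2))  # True
```

A derivation predicate would, symmetrically, check that a new identity at the second time is linked to a parent entity at the first time.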
4.7. Information element: change interpretation
The final objective of the semantic interpretation of scenes is the interpretation of the changes that may occur. The change interpretation process consists of detecting the changes that can affect the different states of objects, as well as the relations between these changes. Changes can be classified into (1) domain-independent occurrences (such as growth, shrinkage, disappearance and appearance) or (2) domain-dependent (or domain-specific) occurrences (such as deforestation, urbanization and desertification). Most research analyzing and studying changes considers the first category as
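Under simplifying assumptions (an object reduced to its surface area at two dates, and a hypothetical `classify_change` helper with an arbitrary tolerance), the first, domain-independent category can be sketched as:

```python
def classify_change(area_t: float, area_t1: float, tol: float = 0.05) -> str:
    """Classify a domain-independent change occurrence for one object
    from its surface area at two dates (0.0 means the object is absent)."""
    if area_t == 0.0 and area_t1 > 0.0:
        return "appearance"
    if area_t > 0.0 and area_t1 == 0.0:
        return "disappearance"
    if area_t1 > area_t * (1 + tol):
        return "growth"
    if area_t1 < area_t * (1 - tol):
        return "shrinkage"
    return "stability"

print(classify_change(0.0, 120.0))    # appearance
print(classify_change(100.0, 150.0))  # growth
```

Domain-dependent occurrences such as urbanization or deforestation would instead be inferred by combining several such elementary occurrences with thematic knowledge about the object classes involved.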
5. Conclusions and discussion
Semantic image interpretation is an important step for any decision-making system. It gives a semantic description of the image content, which allows an agent (user or machine) to take the best management decision for a given situation. The interpretation may take place at different levels of complexity, from the simple recognition of objects in a scene to the inference of site conditions and to change interpretation. In this chapter, we have mainly focused on the semantic interpretation of scenes for change interpretation in remote sensing imagery. We have shown that the semantic interpretation of scenes can be carried out at different levels, from the low level to the high level. For each level, it is important to characterize the structure of the information element (i.e. data, information or knowledge) and its components (i.e. input set, output set, internal and external context) required for the interpretation process. Consequently, a semantic conceptualization based on ontological concepts for representing the components of the information elements and for the interpretation step has been illustrated in this chapter. In particular, ontologies have been exploited to formulate the expert's knowledge such as
Generally, the structure of the information element is composed of the definition set, the content set, the informative relation and both the internal and external context. In addition, as we have shown in Figure 2, quality of information (
References
- 1. Singh A. Review article: Digital change detection techniques using remotely-sensed data. International Journal of Remote Sensing. 1989;10(6):989-1003
- 2. Hussain M et al. Change detection from remotely sensed images: From pixel-based to object-based approaches. ISPRS Journal of Photogrammetry and Remote Sensing. 2013;80:91-106
- 3. Quarmby NA, Cushnie JL. Monitoring urban land cover changes at the urban fringe from SPOT HRV imagery in south-east England. International Journal of Remote Sensing. 1989;10(6):953-963
- 4. Howarth PJ, Wickware GM. Procedures for change detection using Landsat digital data. International Journal of Remote Sensing. 1981;2(3):277-291
- 5. Johnson RD, Kasischke ES. Change vector analysis: A technique for the multispectral monitoring of land cover and condition. International Journal of Remote Sensing. 1998;19(3):411-426
- 6. Bouziani M et al. Automatic change detection of buildings in urban environment from very high spatial resolution images using existing geodatabase and prior knowledge. ISPRS Journal of Photogrammetry and Remote Sensing. 2010;65(1):143-153
- 7. Chen G et al. Object-based change detection. International Journal of Remote Sensing. 2012;33(14):4434-4457
- 8. Chen J et al. Land-use/land-cover change detection using improved change-vector analysis. Photogrammetric Engineering & Remote Sensing. 2003;69(4):369-379
- 9. Addink EA et al. Introduction to the GEOBIA 2010 special issue: From pixels to geographic objects in remote sensing image analysis. International Journal of Applied Earth Observation and Geoinformation. 2012;15:1-6
- 10. Bhatta B. Analysis of data. In: Research Methods in Remote Sensing. Dordrecht: Springer Netherlands; 2013. pp. 61-75. DOI: 10.1007/978-94-007-6594-8_4
- 11. Landry BC, Rush JE. Toward a theory of indexing—II. Journal of the Association for Information Science and Technology. 1970;21(5):358-367
- 12. Blair IV. The malleability of automatic stereotypes and prejudice. Personality and Social Psychology Review. 2002;6(3):242-261
- 13. Zins C. Conceptual approaches for defining data, information, and knowledge. Journal of the Association for Information Science and Technology. 2007;58(4):479-493
- 14. Bosse E, Solaiman B. Information Fusion and Analytics for Big Data and IoT. Norwood, MA, USA: Artech House, Inc.; 2016
- 15. Whitaker GD. An Overview of Information Fusion. Malvern, United Kingdom: Defence Evaluation and Research Agency; 2001
- 16. Losee RM. A discipline independent definition of information. Journal of the American Society for Information Science (1986-1998). 1997;48(3):254
- 17. Lillesand T et al. Remote Sensing and Image Interpretation. John Wiley & Sons; 2014
- 18. Arvor D et al. Advances in geographic object-based image analysis with ontologies: A review of main contributions and limitations from a remote sensing perspective. ISPRS Journal of Photogrammetry and Remote Sensing. 2013;82:125-137
- 19. Smith B, Mark DM. Ontology and geographic kinds. In: Poiker TK, Chrisman N, editors. Proceedings of the 8th International Symposium on Spatial Data Handling. Vancouver, BC: IGU; 1998. pp. 308-320
- 20. Castilla G, Hay GJ. Image objects and geographic objects. In: Object-Based Image Analysis. Berlin, Heidelberg: Springer; 2008. pp. 91-110
- 21. Lang S. Object-based image analysis for remote sensing applications: Modeling reality – dealing with complexity. In: Blaschke T, Lang S, Hay GJ, editors. Object-Based Image Analysis. Lecture Notes in Geoinformation and Cartography. Berlin, Heidelberg: Springer; 2008. pp. 3-27
- 22. Andres S et al. Towards an ontological approach for classifying remote sensing images. In: 8th International Conference on Signal Image Technology and Internet Based Systems (SITIS). IEEE; 2012
- 23. Liu Y et al. A framework of region-based spatial relations for non-overlapping features and its application in object based image analysis. ISPRS Journal of Photogrammetry and Remote Sensing. 2008;63(4):461-475
- 24. Blaschke T. What's wrong with pixels? Some recent developments interfacing remote sensing and GIS. GeoBIT/GIS. 2001;6:12-17
- 25. Smeulders AWM et al. Content-based image retrieval at the end of the early years. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2000;22(12):1349-1380
- 26. Hudelot C et al. Symbol grounding for semantic image interpretation: From image data to semantics. In: Tenth IEEE International Conference on Computer Vision Workshops (ICCVW'05). IEEE; 2005
- 27. Kompatsiaris Y, Hobson P. Semantic Multimedia and Ontologies. London, UK: Springer Verlag Limited; 2008
- 28. Kokar MM, Wang J. Using ontologies for recognition: An example. In: Proceedings of the Fifth International Conference on Information Fusion. Vol. 2. IEEE; 2002
- 29. Gruber TR. A translation approach to portable ontology specifications. Knowledge Acquisition. 1993;5(2):199-220
- 30. Borst WN. Construction of Engineering Ontologies for Knowledge Sharing and Reuse. Centre for Telematics and Information Technology (CTIT). The Netherlands: Universiteit Twente; 1997
- 31. Studer R et al. Knowledge engineering: Principles and methods. Data & Knowledge Engineering. 1998;25(1-2):161-197
- 32. Russell BC et al. LabelMe: A database and web-based tool for image annotation. International Journal of Computer Vision. 2008;77(1):157-173
- 33. Deng J et al. ImageNet: A large-scale hierarchical image database. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2009). IEEE; 2009
- 34. Naphade M et al. Large-scale concept ontology for multimedia. IEEE Multimedia. 2006;13(3):86-91
- 35. Maillot N et al. Ontology based object learning and recognition: Application to image retrieval. In: 16th IEEE International Conference on Tools with Artificial Intelligence (ICTAI 2004). IEEE; 2004
- 36. Mezaris V et al. Region-based image retrieval using an object ontology and relevance feedback. EURASIP Journal on Applied Signal Processing. 2004;2004:886-901
- 37. Town C. Ontological inference for image and video analysis. Machine Vision and Applications. 2006;17(2):94-115
- 38. Raskin RG, Pan MJ. Knowledge representation in the semantic web for earth and environmental terminology (SWEET). Computers & Geosciences. 2005;31(9):1119-1125
- 39. Eisank C et al. A generic procedure for semantics-oriented landform classification using object-based image analysis. Geomorphometry. 2011;2011:125-128
- 40. Almendros-Jiménez et al. A framework for ocean satellite image classification based on ontologies. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing. 2013;6(2):1048-1063
- 41. Belgiu M et al. Coupling formalized knowledge bases with object-based image analysis. Remote Sensing Letters. 2014;5(6):530-538
- 42. Gu H et al. An object-based semantic classification method for high resolution remote sensing imagery using ontology. Remote Sensing. 2017;9(4):329
- 43. Durand N et al. Ontology-based object recognition for remote sensing image interpretation. In: 19th IEEE International Conference on Tools with Artificial Intelligence (ICTAI 2007). Vol. 1. IEEE; 2007
- 44. Messaoudi W et al. A new ontology for semantic annotation of remotely sensed images. In: 2014 1st International Conference on Advanced Technologies for Signal and Image Processing (ATSIP). IEEE; 2014
- 45. Forestier G et al. Knowledge-based region labeling for remote sensing image interpretation. Computers, Environment and Urban Systems. 2012;36(5):470-480
- 46. Forestier G et al. Coastal image interpretation using background knowledge and semantics. Computers & Geosciences. 2013;54:88-96
- 47. Luo H et al. Land cover extraction from high resolution ZY-3 satellite imagery using ontology-based method. ISPRS International Journal of Geo-Information. 2016;5(3):31
- 48. Gui R et al. Individual building extraction from TerraSAR-X images based on ontological semantic analysis. Remote Sensing. 2016;8(9):708
- 49. Hashimoto S et al. A framework of ontology-based knowledge information processing for change detection in remote sensing data. In: 2011 IEEE International Geoscience and Remote Sensing Symposium (IGARSS). IEEE; 2011
- 50. Arenas H et al. LC3: A spatial-temporal data model to study qualified land cover changes. In: Land Use and Land Cover Semantics: Principles, Best Practices and Prospects. 〈hal-01086886〉; 2015. pp. 211-242
- 51. Li W et al. An integrated software framework to support semantic modeling and reasoning of spatiotemporal change of geographical objects: A use case of land use and land cover change study. ISPRS International Journal of Geo-Information. 2016;5(10):179
- 52. Marr D. Vision: A Computational Investigation into the Human Representation and Processing of Visual Information. San Francisco: W.H. Freeman and Company; 1982
- 53. Randell DA et al. A spatial logic based on regions and connection. Knowledge Representation and Reasoning. 1992;92:165-176
- 54. Allen JF. Maintaining knowledge about temporal intervals. Communications of the ACM. 1983;26(11):832-843
- 55. Del Mondo G et al. Modeling consistency of spatio-temporal graphs. Data & Knowledge Engineering. 2013;84:59-80
- 56. Grenon P, Smith B. SNAP and SPAN: Towards dynamic spatial ontology. Spatial Cognition and Computation. 2004;4(1):69-104