Open access peer-reviewed chapter

A Fuzzy Belief-Desire-Intention Model for Agent-Based Image Analysis

By Peter Hofmann

Submitted: October 28th 2016 | Reviewed: February 15th 2017 | Published: August 30th 2017

DOI: 10.5772/67899


Abstract

Recent methods of image analysis in remote sensing still lack a sufficient degree of robustness and transferability. Methods such as object-based image analysis (OBIA) achieve satisfying results on single images. However, the underlying rule sets for OBIA are usually too complex to be applied directly to a variety of image data without adaptations or human interaction. Thus, recent research projects investigate the potential of integrating the agent-based paradigm with OBIA. Agent-based systems are highly adaptive and therefore robust, even under varying environmental conditions. In the context of image analysis, this means that even if the image data to be analyzed varies slightly (e.g., due to seasonal effects, different locations, atmospheric conditions, or even a slightly different sensor), agent-based methods allow existing analysis rules or segmentation results to be adapted autonomously to the changing imaging situation. The basis for an individual software agent’s behavior is a so-called belief-desire-intention (BDI) model. Basically, the BDI model describes, for each individual agent, its goal(s), its assumed current situation, and a set of action rules that potentially support the agent in achieving its goals. This chapter introduces a belief-desire-intention (BDI) model based on fuzzy rules in the context of agent-based image analysis, which extends the classic OBIA paradigm by the agent-based paradigm.

Keywords

  • agent-based image analysis
  • fuzzy belief-desire-intention model
  • object-based image analysis
  • fuzzy control system
  • remote sensing

1. Introduction

Analyzing remote sensing data is strongly bound to methods of image processing and image analysis. In contrast to other imaging techniques, remote sensing is, by definition, a method of acquiring information about the earth’s surface by detecting and analyzing its reflected or emitted electromagnetic radiation without being in direct contact with it. Besides radiation in the visible spectrum, infrared (optical data) and microwave radiation (RADAR) are also used to produce remote sensing images. The remote sensing instruments can be carried by spacecraft (usually satellites) or airborne vehicles (airplanes, drones, etc.). In order to derive geo-information from remote sensing data, the produced images need to be analyzed, that is, preprocessed and classified. In this context, image classification means assigning pixels to meaningful object classes of the earth’s surface, whereby the delineated and classified objects are finally stored in a geographic information system (GIS) as polygons, lines, or points (vector model). With the continuous increase in the spatial (and radiometric) resolution of remote sensing images, image analysis in remote sensing has become more and more complex. Until the late 1990s, the majority of remote sensing data was analyzed by classification methods that take into account the radiation recorded in each single pixel. Meanwhile, rather sophisticated methods of pattern analysis, artificial intelligence, and computer vision are applied.

With the advent of very high resolution (VHR) satellite images, classic methods of image classification as described above failed, since most of the objects of interest are represented in VHR data by numerous and spectrally inhomogeneous pixels. Moreover, properties such as shape, texture, and spatial context play a rather important role when identifying and delineating objects of interest in this kind of data [1–3]. Thus, more or less simultaneously with the advent of VHR satellite images, object-based image analysis (OBIA) has evolved as a new and accepted paradigm for analyzing remote sensing data. In contrast to pixel-based analysis methods, OBIA deals with image objects as the building blocks for analysis. Image objects are initially generated by an arbitrary image segmentation, followed by an initial classification of these image segments. The feature space for classification can be very high dimensional, describing color, shape, texture, or spatial context properties of the desired object classes. Numerous classifiers can be applied, ranging from simple thresholding to Support Vector Machines (SVM), Bayesian Network Classifiers (BNC), and Artificial Neural Networks (ANN). Fuzzy set assignments are possible, too. For the latter, the definition of the fuzzy sets and the underlying fuzzy classification rules should reflect the ontology of the desired object classes [4–6]. Thus, the typical OBIA workflow starts with an initial segmentation (and classification) as described above, followed by an iterative process of knowledge-based segmentation and classification improvement. The latter reflects the so-called task ontology describing the necessary expert knowledge on image processing, which can be stored as an OBIA rule set and reapplied. However, the more precisely and reliably remote sensing data has to be analyzed, the more complex the methods and rule sets become. This complexity in turn reduces the rule sets’ transferability. In order to achieve acceptable results for different image data, more manual interaction, such as changing single rules or manually correcting object borders and/or class assignments, is necessary [7, 8]. Consequently, in order to benefit from OBIA’s advantages for numerous images or even whole image archives, intelligent and flexible solutions are necessary that are capable of adapting autonomously to image variability.
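To make the object-based workflow more concrete, the following Python sketch segments an image, derives per-object features, and assigns a fuzzy membership to a hypothetical class. The class (“vegetation”), the feature (mean green intensity), and the membership bounds are illustrative assumptions and do not stem from any particular OBIA rule set.

```python
# A minimal, illustrative OBIA-style workflow: segment an image, derive
# per-object features, and assign a fuzzy membership to a hypothetical class
# "vegetation" via a trapezoidal membership function. The feature, the class,
# and the bounds are assumptions for illustration only.
from skimage import data, segmentation, measure

def trapezoid(x, a, b, c, d):
    """Trapezoidal fuzzy membership: 0 outside [a, d], 1 inside [b, c]."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

image = data.astronaut()                      # stand-in for a remote sensing scene
labels = segmentation.slic(image, n_segments=200, compactness=10, start_label=1)

memberships = {}
for region in measure.regionprops(labels, intensity_image=image[..., 1]):
    # Illustrative fuzzy rule: "vegetation" objects have a fairly high but not
    # saturated mean green intensity (8-bit range, bounds invented).
    mu = trapezoid(region.mean_intensity, 80, 120, 200, 240)
    memberships[region.label] = mu
```

In a real OBIA rule set, several such fuzzy conditions (color, shape, texture, context) would be combined per object class, which is exactly what the cascaded scheme in Section 3.1 does.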

2. Agent-based and multiagent systems

Agent-based and multiagent systems (MAS) have recently shown a variety of applications: they range from the simulation of complex systems such as social systems [9] and ecosystems [10, 11] to the automation and optimization of complex production systems such as industrial processes [12–14]. In software development, the agent-oriented paradigm has meanwhile evolved as a new paradigm that extends the classic object-oriented approach. Simply spoken, the general differences are: (1) objects behave rather passively compared to agents, that is, objects only change once they receive an appropriate signal, while agents behave proactively and collaboratively, and (2) agents can be mobile, while objects are static. Thus, agents have (individual) goals they intend to achieve; they have sensors and effectors that enable them to become aware of their current status and to interact with their environment. Agents can decide autonomously about their potential next action. The environment agents interact with can be of arbitrary complexity, ranging from other (human) agents to factory plants, sports fields, traffic situations, etc. When embedded in (collaborative) MAS, individual agents often have different roles but common goals. All these abilities allow software agents and MAS to react flexibly but robustly to unforeseeable changes in their environment.

Since each agent needs to have a certain situation awareness, each agent must be capable of appraising its current situation, that is, of evaluating its degree of goal achievement and the acting opportunities that support its goal achievement. This sort of situation awareness is commonly modeled by the belief-desire-intention (BDI) model [15–19]. Simply spoken, the BDI model allows an agent to analyze its current situation and to choose, from a predefined list of plans, the most promising one in order to achieve its goals. It is obvious that the design of software agents and MAS requires ontologies that are capable of formally describing an agent’s or MAS’ environment and that allow individual agents to infer their current situation. Further, ontologies are necessary to describe an agent’s goals and to infer the most promising action for goal achievement [20]. Casali et al. [21] extend the classic BDI model to a graded BDI (g-BDI) model, which allows each agent to express its preferences among its acting opportunities, while Shen et al. [22] introduce an agent fuzzy decision making (AFDM) approach, which extends the classic BDI model “by making decisions based on quantified fuzzy judgment.” Zarandi and Ahmadpour [23] present a fuzzy agent-based expert system for the steel making process that uses a fuzzily described knowledge base.
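The classic BDI control loop described above can be summarized by the following Python sketch. All names are illustrative; no specific BDI framework is implied.

```python
# A minimal sketch of a BDI control loop: the agent revises its beliefs from
# percepts, then commits to (and executes) the plan that currently promises
# the highest goal achievement. All names are illustrative assumptions.
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Plan:
    name: str
    # Estimates how much executing this plan would further the goal,
    # given the agent's current beliefs (0 = useless, 1 = goal reached).
    utility: Callable[[Dict], float]
    execute: Callable[[Dict], None]

@dataclass
class BDIAgent:
    beliefs: Dict = field(default_factory=dict)
    plans: List[Plan] = field(default_factory=list)

    def perceive(self, percepts: Dict) -> None:
        self.beliefs.update(percepts)          # simplified belief revision

    def deliberate(self) -> Plan:
        # Intention = the most promising plan under the current beliefs.
        return max(self.plans, key=lambda p: p.utility(self.beliefs))

    def act(self) -> None:
        self.deliberate().execute(self.beliefs)
```

In the terminology used above, the beliefs correspond to the agent’s assumed current situation, a plan’s utility to its expected contribution to the desires, and the selected plan to the intention.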

2.1. Agent-based image analysis

Although the agent-based paradigm has meanwhile matured, it and potential applications based on it are not yet very widespread in the image analysis community. Most applications can be found in the field of image coregistration and image fusion [24–26]. In the field of object detection and delineation, reported applications are still rare [27–31], and even rarer in the field of remote sensing image analysis [32–35].

2.2. Software agents and multiagent systems in OBIA

As already mentioned, in order to fully exploit the advantages of OBIA, there is a strong need for more robust and transferable methods. The limiting factors are the rule sets’ complexity and the unpredictable variations in the appearance of image objects in remote sensing data. Against this background, the integration of the agent-based paradigm with OBIA has recently been investigated in order to improve OBIA’s degree of automation, its robustness, and its transferability. MAS seem to have the potential to overcome OBIA’s obstacles [33, 36–38]. Especially their ability to react flexibly to environmental perturbations, which in the remote sensing domain are given by varying illumination, seasons, locations, sensors, and atmospheric conditions, is a promising aspect to be investigated. Consequently, Hofmann et al. [38] developed a conceptual framework for agent-based image analysis (ABIA), which suggests two principal ways of integration: (1) adapting already existing OBIA rule sets (e.g., thresholds of single rules) by means of a MAS built of respective rule set adaptation agents (RSAAs) and (2) evolving OBIA image objects into image object agents (IOAs). In the first approach, different RSAAs adapt a rule set’s rules in order to improve its classification results. As a constraint, adaptations must not violate the underlying ontology of the original rule set. The latter is controlled by one or more control agents (CAs), which also give feedback on whether a to-be-defined minimum classification quality has been achieved after rule set adaptation (Figure 1).

Figure 1.

Principle workflow for OBIA rule set adaptation by means of a MAS built by RSAAs (Source: [38]).
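One possible reading of this adaptation workflow is sketched below in Python. The functions classify, evaluate_quality, and ontology_ok are placeholders the caller has to supply, and the random threshold perturbation is only one conceivable RSAA strategy, not the one published in [38].

```python
# Schematic sketch of the rule set adaptation loop of Figure 1: RSAAs propose
# threshold changes, a control agent (CA) enforces the ontology constraint and
# a minimum classification quality. rule_set maps rule names to numeric
# thresholds; all names and the perturbation strategy are assumptions.
import random

def adapt_rule_set(rule_set: dict, classify, evaluate_quality,
                   ontology_ok, min_quality=0.85, max_iter=50):
    best = dict(rule_set)
    best_q = evaluate_quality(classify(best))
    for _ in range(max_iter):
        if best_q >= min_quality:              # CA: quality goal reached
            break
        candidate = dict(best)
        # RSAA: perturb one threshold of one rule (here: random +/- 10 %)
        rule = random.choice(list(candidate))
        candidate[rule] *= random.uniform(0.9, 1.1)
        if not ontology_ok(candidate):         # CA: ontology must not be violated
            continue
        q = evaluate_quality(classify(candidate))
        if q > best_q:                         # keep only improving adaptations
            best, best_q = candidate, q
    return best, best_q
```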

In the second approach, after the initial segmentation and (fuzzy) classification, the IOAs build a hierarchical network. Each IOA intends to become the best possible member of its assigned class (goal, desire). To achieve this goal, every IOA can change its shape by a number of predefined methods (effectors, intentions). Further, every IOA is aware of its topological situation and can communicate with other IOAs in the hierarchical net. The underlying ontology for this approach is given by the (fuzzy) class descriptions (Figure 2).

Figure 2.

Scheme of a MAS built up by hierarchically organized IOAs and CAs (Source: [38]).

The ABIA framework has been implemented in a typical environment for agent-based modeling (REPAST [39]) as well as in a typical OBIA environment (eCognition) [40], realizing the IOA approach. In both implementations, the real-world objects to be delineated and identified were described as fuzzy sets based on an appropriate ontology. As test scene, a very high resolution digital orthophoto (0.08 m) has been used together with an appropriate digital surface model (DSM) and the slope and curvature (slope of slope) calculated per pixel (Figure 3).

Figure 3.

Used image and DSM data for first implementation of the ABIA framework (Source: [40]).

The rule set was intentionally designed to delineate buildings in that particular scene following the ontology as outlined in Figure 4.

Figure 4.

Ontology describing buildings and their appearance in the given data (Source: [38]).

However, if the rule set is applied without any further adaptations, it creates a rather over-segmented image, which is a typical OBIA situation after reapplying a given rule set to similar images. The BDI model to solve this problem has been implemented as a hybrid model. That is, the class definitions were implemented as fuzzy sets, whereas the decision rules for the next action were designed crisp. For the latter, all three provided actions were simply executed virtually for each IOA. Every IOA then opted for the action that improved its class membership to “building” the most. In the example demonstrated, the final result was achieved after only 17 iteration steps (Figure 5).

Figure 5.

Segmentation and classification result before (left) and after (right) applying the agent-based optimization approach according to the ABIA framework. Numbers indicate the membership degree to “building” (Source: [38]).
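The crisp decision scheme just described can be summarized by the following sketch: each IOA virtually executes every available shape action, evaluates the resulting membership to its target class, and commits to the action with the highest gain. The function and action names are illustrative; the actual eCognition implementation is not reproduced here.

```python
# Greedy, crisp action selection: try every action virtually and keep the one
# that yields the best class membership; otherwise do nothing.
def choose_action(ioa, actions, membership):
    """actions: dict name -> function returning a *virtual* copy of the IOA."""
    current_mu = membership(ioa)
    best_name, best_mu = "do_nothing", current_mu
    for name, virtual_apply in actions.items():
        candidate = virtual_apply(ioa)         # simulated, not committed
        mu = membership(candidate)
        if mu > best_mu:
            best_name, best_mu = name, mu
    return best_name, best_mu
```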

3. A fuzzy belief-desire-intention model for agent-based image analysis

Based on the results already achieved with the relatively simple crisp BDI model, it has been investigated here whether the IOAs’ intentions could also be expressed in a fuzzy manner and whether this is of advantage compared to the BDI model described above. For this purpose, the existing rule set, which had been developed in a software environment dedicated to OBIA (here: the commercial software eCognition), has been extended by the necessary components. In a standard fuzzy control system, control is given implicitly through the membership functions, e.g., “the colder the room temperature, the more open the heater’s valve.” In this particular case, the software used only allows fuzzy sets (alias classes) to be described in a fuzzy manner. Consequently, the agents’ acting opportunities had to be expressed as fuzzy sets, whereby the membership degree to an “action class” can be interpreted as the “intention degree” or the willingness of an agent to perform that particular action. Another difference of the fuzzy BDI model developed here is that it only evaluates the current situation. That is, there is no virtual test of whether and how each potential action would improve an agent’s situation (here: its class membership).
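For comparison, the quoted heater rule corresponds to a very small fuzzy controller of the following kind; the membership functions and the weighted-average defuzzification are invented for illustration and serve only to show how control emerges implicitly from the membership functions.

```python
# A minimal fuzzy controller for the quoted heater example: overlapping
# membership functions for "cold" and "warm" and a weighted-average
# (zero-order Sugeno style) defuzzification. All numbers are illustrative.
def mu_cold(t):  # 1 at or below 10 degC, 0 at or above 22 degC
    return min(max((22.0 - t) / 12.0, 0.0), 1.0)

def mu_warm(t):  # 0 at or below 14 degC, 1 at or above 26 degC
    return min(max((t - 14.0) / 12.0, 0.0), 1.0)

def valve_opening(temperature_c: float) -> float:
    """Rule 1: IF cold THEN valve fully open. Rule 2: IF warm THEN valve closed."""
    w_cold, w_warm = mu_cold(temperature_c), mu_warm(temperature_c)
    # The two sets overlap, so the denominator never vanishes.
    return 100.0 * w_cold / (w_cold + w_warm)   # percent open

print(valve_opening(16.0))   # cooler room -> valve largely open (75.0)
print(valve_opening(20.0))   # warmer room -> valve mostly closed (25.0)
```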

3.1. Components of the fuzzy BDI model

In order to fuzzify each IOA’s beliefs, its current status quo after segmentation and classification is analyzed. That is, each IOA had to be enabled not only to know its current class membership degree (degree of goal/desire achievement), but also, for each classification rule, the degree of its contribution to that particular classification result. The latter allows each IOA to select an action that improves one of those conditions.

3.1.1. Fuzzy beliefs

For this purpose, the conditions that constitute a “building” as described in the ontology were separated into the following categories: color conditions, DSM conditions, and shape conditions. The class “roof,” which represents buildings, was consequently described through these aggregated conditions, whereby the property “Classification Value of …” expresses the membership degree μ for this particular class, or the degree of fulfillment (DOF) of that particular condition. Similarly, the condition classes were further deconstructed into classes describing the DOF of property conditions (feature-2-class) or of operator conditions (operator-2-class). Table 1 shows the cascaded classification scheme. With this decomposition, every IOA can now determine the degree to which each of the fuzzy classification rules or rule groups (color conditions, shape conditions, or context conditions) contributes to its final class assignment result (= degree of goal achievement). In the example displayed in Figure 6, the red-outlined IOA fulfills the criteria for “roof” only to a degree of 0.432: the color conditions are fully fulfilled, the shape conditions are fulfilled to a degree of 0.555, and the DSM conditions to a degree of 0.432. The shape conditions are not fully fulfilled because the IOA’s rectangular fit value of 0.76 leads to a degree of fulfillment of only 0.555 for that condition.

Table 1.

Cascaded fuzzy classification scheme for the class “roof” as described in Figure 4.

Figure 6.

Evaluation of classification conditions of an IOA.

Similarly, the IOA’s DSM conditions show a DOF of 0.432 because the mean difference to its neighbors in the DSM is 1.631, translating to a DOF of 0.432 for that feature.
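The following sketch reconstructs this belief evaluation. The linear membership functions are assumptions chosen only so that they approximately reproduce the reported degrees of fulfillment, and the minimum is assumed as the fuzzy AND combining the condition groups, as is common in eCognition-style rule sets; the actual membership bounds of the rule set are not given in this excerpt.

```python
# Reconstruction of the belief evaluation of Figure 6: each condition group
# yields a degree of fulfillment (DOF); the class membership is assumed to be
# their fuzzy AND (minimum). Membership bounds are illustrative assumptions.
def ramp(x, lo, hi):
    """Linearly increasing membership: 0 at or below lo, 1 at or above hi."""
    return min(max((x - lo) / (hi - lo), 0.0), 1.0)

# Beliefs of the red-outlined IOA (values reported in the text):
dof_color = 1.0                           # color conditions fully fulfilled
dof_shape = ramp(0.76, 0.60, 0.888)       # rectangular fit 0.76 -> DOF ~ 0.555
dof_dsm   = ramp(1.631, 0.0, 3.775)       # mean DSM difference 1.631 -> DOF ~ 0.432

mu_roof = min(dof_color, dof_shape, dof_dsm)   # -> 0.432, as reported
print(round(dof_shape, 3), round(dof_dsm, 3), round(mu_roof, 3))
```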

3.1.2. Fuzzy intentions

Based on these fuzzy beliefs, an intended next action can be determined in a fuzzy manner, based on appropriate fuzzy decision rules, again expressed as fuzzy sets. In the present example, the following possible actions were implemented: “grow,” “shrink,” “smooth,” “merge,” and “do nothing.” While the latter action is obvious and applies only if an IOA has already achieved its goal, the former actions point to procedures offered by eCognition and can easily be exchanged or adapted if necessary. In this particular example, the actions translate to the following (a minimal sketch of the “grow” operation is given after the list):

  • “grow”: grow the IOA of concern by one pixel into neighboring IOAs that are unclassified.

  • “shrink”: shrink the IOA of concern by one pixel.

  • “merge”: merge the IOA of concern with its unclassified neighbors.

  • “smooth”: perform a grow-and-shrink sequence by one pixel each starting with shrink.
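The sketch below illustrates the “grow” action, assuming the scene is held as a label raster in which 0 marks unclassified pixels: the IOA expands by one pixel, but only into currently unclassified neighbors. It mimics the eCognition growing procedure only conceptually and is not its implementation.

```python
# Morphological sketch of "grow by one pixel into unclassified neighbours".
import numpy as np
from scipy import ndimage

def grow_by_one_pixel(labels: np.ndarray, ioa_label: int) -> np.ndarray:
    """Return a new label raster with the IOA grown into unclassified pixels."""
    mask = labels == ioa_label
    # 4-connected dilation by one pixel around the current object.
    ring = ndimage.binary_dilation(mask) & ~mask
    grown = labels.copy()
    grown[ring & (labels == 0)] = ioa_label   # claim only unclassified pixels
    return grown

# "shrink" could be sketched analogously with ndimage.binary_erosion, and
# "smooth" as an erosion followed by a dilation (a morphological opening).
labels = np.array([[0, 0, 0, 0],
                   [0, 7, 7, 0],
                   [0, 7, 2, 0],
                   [0, 0, 0, 0]])
print(grow_by_one_pixel(labels, 7))
```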

The corresponding intentions are defined as the fuzzy sets “want grow,” “want shrink,” “want smooth,” “want merge,” and “do nothing.” The degree or intensity with which an agent wants to execute one of these actions can depend on the previously determined classification conditions, on the spatial situation the IOA is embedded in, or on a combination of both. Each of the action intensities is expressed gradually, that is, through an action membership (Table 2). In the example given, an IOA’s intention to shrink is defined analogously to “want grow”: it wants to shrink the more, the closer its size is to the upper bound of the area rule for “building” (“Upper bound of Area”). An IOA wants to do nothing the more it already fulfills the “building” criteria. It wants to grow the lower the contrast and elevation difference at its border are. Its intention to merge increases the smaller its area is and, simultaneously, the more similar its DSM and color criteria are to those of its neighbors. Its intention to smooth its border increases the less the shape criteria for “building” are fulfilled.

Table 2.

Definition of intentions as fuzzy sets.
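A Python sketch of how such intention degrees could be evaluated is given below. The belief keys, the membership bounds, and the combination operators are assumptions that only mirror the qualitative rules of Table 2; the 0.5 cut-off corresponds to the minimum intention value discussed below.

```python
# Fuzzy intentions derived from an IOA's fuzzy beliefs. All belief keys and
# membership bounds are illustrative assumptions mirroring Table 2.
def ramp_up(x, lo, hi):    # 0 at or below lo, 1 at or above hi
    return min(max((x - lo) / (hi - lo), 0.0), 1.0)

def ramp_down(x, lo, hi):  # 1 at or below lo, 0 at or above hi
    return 1.0 - ramp_up(x, lo, hi)

def intentions(b: dict) -> dict:
    return {
        # the better the "building" criteria are already met, the less to do
        "do_nothing":  b["mu_building"],
        # low border contrast and low elevation difference -> wants to grow
        "want_grow":   min(ramp_down(b["border_contrast"], 5.0, 30.0),
                           ramp_down(b["border_dsm_diff"], 0.5, 3.0)),
        # close to the upper area bound of "building" -> wants to shrink
        "want_shrink": ramp_up(b["area"] / b["area_upper_bound"], 0.8, 1.0),
        # small and similar to its unclassified neighbours -> wants to merge
        "want_merge":  min(ramp_down(b["area"], 50.0, 400.0),
                           b["neighbour_similarity"]),
        # weak shape conditions -> wants to smooth its border
        "want_smooth": ramp_down(b["dof_shape"], 0.3, 0.9),
    }

def acceptable_actions(beliefs: dict, threshold: float = 0.5):
    """Rank intentions and keep only those clearly wanted (>= threshold)."""
    ranked = sorted(intentions(beliefs).items(), key=lambda kv: kv[1], reverse=True)
    return [(name, mu) for name, mu in ranked if mu >= threshold]
```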

As displayed in Figure 7, the red-outlined IOA prefers to grow. However, similar to the ambiguity of class assignments of objects, each IOA can have ambiguous intentions in terms of a favorite action, a second favorite, and so on. Further, as with fuzzy class assignments [41], a minimum intention value should be defined (sensibly not less than 0.5) below which an intended action must be regarded as not clearly enough wanted and thus is not considered further.

Figure 7.

Grades of intentions for actions an IOA wants to perform for its improvement.

In the example given in Figure 7, although the IOA’s most wanted action is to grow, this action is not the only one it intends to perform. Since the intention value for “want_grow” is not close to 1.0, the IOA does not seem to be fully convinced that this action will achieve its goal. Merging (intention = 0.68) seems to be an option, although it is only second best. In other words, the IOA could also be satisfied if it merges. The only thing that is clear is that the IOA does not want to shrink (intention = 0.0), and its willingness to smooth its border (intention = 0.0298) is even lower than that for doing nothing (intention = 0.356).

4. Conclusion

The amount of remote sensing data stored in archives is increasing continuously. Against the background of an increasing demand for reliable, precise, and timely geoinformation, searching these archives and analyzing the image data stored in them calls for methods of reliable and automated image analysis. While OBIA is an accepted and highly accurate method for analyzing especially VHR remote sensing data, its robustness and transferability, as well as its degree of automation, are still low. Major obstacles to automating OBIA and its methods are their sensitivity to perturbations, that is, the variability of the images and objects. ABIA, as an extension of the OBIA paradigm, has the potential to overcome these obstacles, since it is able to react more flexibly and robustly to unforeseeable perturbations. Nevertheless, research in this field is still in its beginnings.

The example demonstrated in this chapter is just one aspect of this wide research field. It has been demonstrated how a fuzzy BDI model that allows individual IOAs to control their improvement actions within a MAS can be implemented in standard OBIA software. Further research needs to be done on learning mechanisms for individual agents as well as on improved individual decision rules. Last but not least, performance is another aspect that needs to be investigated in this field.

5. Outlook

The implemented fuzzy BDI model acts as the basis for a negotiation model that can be applied to ABIA agents: based on their individual action priorities, IOAs can negotiate their common next action(s) and thereby optimize the overall classification result across multiple but different images of the same kind, as is typical in remote sensing. While the fuzzy BDI model has been implemented for IOAs, it is in principle also applicable to rule set adaptation agents (RSAAs).
