Open access peer-reviewed chapter

A Fuzzy Belief-Desire-Intention Model for Agent-Based Image Analysis

Written By

Peter Hofmann

Submitted: 28 October 2016 Reviewed: 15 February 2017 Published: 30 August 2017

DOI: 10.5772/67899

From the Edited Volume

Modern Fuzzy Control Systems and Its Applications

Edited by S. Ramakrishnan


Abstract

Recent methods of image analysis in remote sensing lack a sufficient degree of robustness and transferability. Methods such as object-based image analysis (OBIA) achieve satisfying results on single images. However, the underlying rule sets for OBIA are usually too complex to be applied directly to a variety of image data without any adaptations or human interaction. Thus, recent research projects investigate the potential of integrating the agent-based paradigm with OBIA. Agent-based systems are highly adaptive and therefore robust, even under varying environmental conditions. In the context of image analysis, this means that even if the image data to be analyzed varies slightly (e.g., due to seasonal effects, different locations, atmospheric conditions, or even a slightly different sensor), agent-based methods allow existing analysis rules or segmentation results to be adapted autonomously to the changed imaging situation. The basis for each individual software agent’s behavior is a so-called belief-desire-intention (BDI) model. Basically, the BDI model describes for each individual agent its goal(s), its assumed current situation, and some action rules potentially supporting the agent in achieving its goals. The chapter introduces a belief-desire-intention (BDI) model based on fuzzy rules in the context of agent-based image analysis, which extends the classic OBIA paradigm by the agent-based paradigm.

Keywords

  • agent-based image analysis
  • fuzzy belief-desire-intention model
  • object-based image analysis
  • fuzzy control system
  • remote sensing

1. Introduction

Analyzing remote sensing data is strongly bound to methods of image processing and image analysis. In contrast to other imaging techniques, remote sensing is, by definition, a method of acquiring information about the earth’s surface by detecting and analyzing its reflected or emitted electromagnetic radiation, without being in direct contact with it. Besides radiation in the visible spectrum, infrared (optical data) and microwave radiation (RADAR) are also used to produce remote sensing images. The remote sensing instruments can be carried by spacecraft (usually satellites) or airborne vehicles (airplanes, drones, etc.). In order to gather geo-information from remote sensing data, the produced images need to be analyzed, that is, preprocessed and classified. In this context, image classification means assigning pixels to meaningful object classes of the earth’s surface, whereas the delineated and classified objects are finally stored in a geographic information system (GIS) as polygons, lines, or points (vector model). With the continuous increase of remote sensing images’ spatial (and radiometric) resolution, image analysis in remote sensing has become more and more complex. Until the late 1990s, the majority of remote sensing data was analyzed by means of classification methods taking into account the radiation stored in each single pixel. Meanwhile, rather sophisticated methods of pattern analysis, artificial intelligence, and computer vision are applied.

With the advent of very high resolution (VHR) satellite images, classic methods of image classification, as described above, failed, since most of the objects of interest are represented in VHR data by numerous and spectrally inhomogeneous pixels. Moreover, properties such as shape, texture, and spatial context play a rather important role when identifying and delineating objects of interest in this kind of data [1–3]. Thus, more or less simultaneously with the advent of VHR satellite images, object-based image analysis (OBIA) evolved as a new and now accepted paradigm for analyzing remote sensing data. In contrast to pixel-based analysis methods, OBIA deals with image objects as the building blocks for analysis. Image objects are initially generated by an arbitrary image segmentation, followed by an initial classification of these image segments. The feature space for classification can be very high-dimensional, describing color, shape, texture, or spatial context properties of the desired object classes. Numerous classifiers can be applied, ranging from simple thresholding to Support Vector Machines (SVM), Bayesian Network Classifiers (BNC), and Artificial Neural Networks (ANN). Fuzzy set assignments are possible, too. For the latter, the definition of fuzzy sets and the underlying fuzzy classification rules should reflect the ontology of the desired object classes [4–6]. Thus, the typical workflow of OBIA starts with an initial segmentation (and classification) as described above, followed by an iterative process of knowledge-based segmentation and classification improvement. The latter reflects the so-called task ontology describing the necessary expert knowledge on image processing, which can be stored as an OBIA rule set and reapplied. However, the more precisely and reliably remote sensing data has to be analyzed, the more complex the methods and rule sets become, which ultimately reduces the rule sets’ transferability. In order to achieve acceptable results for different image data, more manual interaction, such as changing single rules or manually correcting object borders and/or class assignments, is necessary [7, 8]. Consequently, in order to benefit from OBIA’s advantages for numerous images or even whole image archives, intelligent and flexible solutions are necessary that are capable of autonomously adapting to image variability.
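To make the object-based (as opposed to pixel-based) workflow concrete, the following minimal Python sketch, which is not taken from the chapter, computes simple per-segment color and size features from a given label image and applies a toy crisp rule. In real OBIA the segmentation, the feature space, and the rules are far richer; all values and the rule itself are illustrative assumptions.

```python
import numpy as np

# Hypothetical 4x4 label image from a prior segmentation (segment IDs 1-3).
labels = np.array([[1, 1, 2, 2],
                   [1, 1, 2, 2],
                   [3, 3, 2, 2],
                   [3, 3, 3, 3]])

# Matching single-band intensity image (e.g., one spectral channel).
intensity = np.array([[0.9, 0.8, 0.2, 0.1],
                      [0.9, 0.7, 0.2, 0.2],
                      [0.5, 0.4, 0.1, 0.2],
                      [0.5, 0.5, 0.4, 0.5]])

for seg_id in np.unique(labels):
    mask = labels == seg_id
    mean_val = intensity[mask].mean()   # a simple color feature
    area = int(mask.sum())              # a simple size/shape feature
    # Toy crisp rule: bright and sufficiently large segments become "roof".
    cls = "roof" if mean_val > 0.6 and area >= 4 else "other"
    print(f"segment {seg_id}: mean={mean_val:.2f}, area={area} -> {cls}")
```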


2. Agent-based and multiagent systems

Agent-based and multiagent systems (MAS) have recently found a wide variety of applications: they range from the simulation of complex systems such as social systems [9] and ecosystems [10, 11] to the automation and optimization of complex production systems such as industrial processes [12–14]. In software development, the agent-oriented paradigm has meanwhile evolved as a new paradigm that extends the classic object-oriented approach. Simply put, the general differences are: (1) objects behave passively whereas agents behave proactively and collaboratively, that is, objects only change state once they receive an appropriate signal, and (2) agents can be mobile while objects are static. Thus, agents have (individual) goals they intend to achieve; they have sensors and effectors that enable them to become aware of their current status and to interact with their environment. Agents can decide autonomously about their potential next action. The environment agents interact with can be of arbitrary complexity, ranging from other (human) agents to factory plants, sports fields, traffic situations, etc. When embedded in a (collaborative) MAS, individual agents often have different roles but common goals. All these abilities allow software agents and MAS to react flexibly yet robustly to unforeseeable changes in their environment.

Since each agent needs a certain situation awareness, each agent must be capable of appraising its current situation, that is, of evaluating its degree of goal achievement and the acting opportunities supporting its goal achievement. This sort of situation awareness is commonly modeled by the belief-desire-intention (BDI) model [15–19]. Simply put, the BDI model allows an agent to analyze its current situation and to choose from a predefined list of plans the most promising one for achieving its goals. It is obvious that the design of software agents and MAS requires ontologies that are capable of formally describing an agent’s or MAS’ environment and that allow individual agents to infer their current situation. Further, ontologies are necessary to describe an agent’s goals and to infer the most promising action for goal achievement [20]. Casali et al. [21] extend the classic BDI model to a graded BDI (g-BDI) model, which allows each agent to express its preferences among its acting opportunities, while Shen et al. [22] introduce an agent fuzzy decision making (AFDM) approach, which extends the classic BDI model “by making decisions based on quantified fuzzy judgment.” Zarandi and Ahmadpour [23] present a fuzzy agent-based expert system for the steel making process that uses a fuzzily described knowledge base.
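As an illustration of the deliberation cycle sketched above, the following hypothetical Python fragment encodes beliefs as a simple dictionary, plans with utility functions, and intention selection as an argmax over plans. It is a minimal sketch of the idea only; the belief features, plan names, and utility functions are invented for illustration, and the BDI literature cited above defines the model far more richly.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Plan:
    name: str
    # Scores how promising this plan is, given the agent's current beliefs.
    utility: Callable[[Dict[str, float]], float]

@dataclass
class BDIAgent:
    beliefs: Dict[str, float]            # the agent's view of its situation
    desire: str                          # the goal it tries to achieve
    plans: List[Plan] = field(default_factory=list)

    def deliberate(self) -> Plan:
        # Intention = the plan that currently promises the most progress.
        return max(self.plans, key=lambda p: p.utility(self.beliefs))

agent = BDIAgent(
    beliefs={"goal_achievement": 0.4, "border_contrast": 0.2},
    desire="be a good member of class 'building'",
    plans=[
        Plan("grow", lambda b: 1.0 - b["border_contrast"]),
        Plan("do nothing", lambda b: b["goal_achievement"]),
    ],
)
print(agent.deliberate().name)  # -> "grow" for these beliefs
```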

2.1. Agent-based image analysis

Although the agent-based paradigm has meanwhile matured, it and its potential applications are not yet very widespread in the image analysis community. Most applications can be found in the fields of image coregistration and image fusion [24–26]. In the field of object detection and delineation, reported applications are still rare [27–31], and they are even rarer in the field of remote sensing image analysis [32–35].

2.2. Software agents and multiagent systems in OBIA

As already mentioned, in order to fully exploit the advantages of OBIA, there is a strong need for more robustness and transferability of methods. The limiting factors are the rule sets’ complexity and the unpredictable variations of the image objects’ appearance in remote sensing data. Against this background, the integration of the agent-based paradigm with OBIA has recently been investigated in order to improve OBIA’s degree of automation, robustness, and transferability. MAS seem to have the potential to overcome OBIA’s obstacles [33, 36–38]. Especially their ability to react flexibly to environmental perturbations, which in the remote sensing domain arise from varying illumination, seasons, locations, sensors, and atmospheric conditions, is a promising aspect to be investigated. Consequently, Hofmann et al. [38] developed a conceptual framework for agent-based image analysis (ABIA), which suggests two principal ways of integration: (1) adapting already existing OBIA rule sets (e.g., thresholds of single rules) by means of a MAS built from respective rule set adaptation agents (RSAAs) and (2) evolving OBIA image objects into image object agents (IOAs). In the first approach, different RSAAs adapt a rule set’s rules in order to improve its classification results. As a constraint, adaptations must not violate the underlying ontology of the original rule set. The latter is controlled by one or more control agents (CAs), which also give feedback on whether a to-be-defined minimum classification quality has been achieved after rule set adaptation (Figure 1).

Figure 1.

Principle workflow for OBIA rule set adaptation by means of a MAS built by RSAAs (Source: [38]).
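The framework paper [38] does not prescribe a concrete adaptation strategy, so the following Python sketch of approach (1) is purely illustrative: a single threshold of one rule is perturbed by a greedy, randomized hill climb, the ontology constraint is enforced as hard bounds, and the control agent’s feedback is abstracted into a quality function that also decides when the adapted rule set is acceptable. All function names and the toy quality surface are assumptions.

```python
import random

def adapt_threshold(threshold, quality_of, bounds, min_quality,
                    step=0.05, max_iter=50, seed=0):
    """Illustrative RSAA-style adaptation of a single rule-set threshold.
    quality_of() abstracts the control agent's feedback; bounds encode the
    ontology constraint that adaptations must not violate."""
    rng = random.Random(seed)
    best, best_q = threshold, quality_of(threshold)
    for _ in range(max_iter):
        candidate = best + rng.uniform(-step, step)
        candidate = min(max(candidate, bounds[0]), bounds[1])  # stay in ontology
        q = quality_of(candidate)
        if q > best_q:                   # keep only improving adaptations
            best, best_q = candidate, q
        if best_q >= min_quality:        # control agent accepts the rule set
            break
    return best, best_q

# Toy quality surface peaking at a threshold of 0.7:
quality = lambda t: 1.0 - abs(t - 0.7)
print(adapt_threshold(0.5, quality, bounds=(0.4, 0.9), min_quality=0.95))
```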

In the second approach, after initial segmentation and (fuzzy) classification, the IOAs build up a hierarchical network. Each IOA intends to become the best possible member of its assigned class (goal, desire). To achieve this goal, every IOA can change its shape by a number of predefined methods (effectors, intention). Further, every IOA is aware of its topological situation and can communicate with other IOAs in the hierarchical net. The underlying ontology for this approach is given by the (fuzzy) class descriptions (Figure 2).

Figure 2.

Scheme of a MAS built up by hierarchically organized IOAs and CAs (Source: [38]).

The ABIA framework has been implemented in a typical environment for agent-based modeling (REPAST [39]) as well as in a typical OBIA environment (eCognition) [40], realizing the IOA approach. In both implementations, the real-world objects to be delineated and identified were described as fuzzy sets based on an appropriate ontology. As a test scene, a very high resolution digital orthophoto (0.08 m) has been used together with an appropriate digital surface model (DSM) and the slope and curvature (slope of slope) calculated per pixel (Figure 3).

Figure 3.

Used image and DSM data for first implementation of the ABIA framework (Source: [40]).

The rule set was intentionally designed to delineate buildings in that particular scene following the ontology as outlined in Figure 4.

Figure 4.

Ontology describing buildings and their appearance in the given data (Source: [38]).

However, if the rule set is applied without any further adaptations, it creates a rather over-segmented image, which is a typical OBIA outcome when reapplying a given rule set to similar images. The BDI model to solve this problem has been implemented as a hybrid model. That is, the class definitions were implemented as fuzzy sets, whereas the decision rules for the next action were designed as crisp rules. For the latter, all three provided actions were simply executed virtually for each IOA. Every IOA then opted for the action that best improved its class membership to “building.” In the demonstrated example, the final result was achieved after only 17 iteration steps (Figure 5).
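The crisp decision scheme just described can be condensed into a few lines: each provided action is executed virtually, and the IOA commits to the action yielding the highest “building” membership, iterating until no action improves it further. The following Python sketch is an illustration only; it reduces an IOA to a single area feature and uses an invented one-dimensional membership function, whereas the real implementation evaluates the full fuzzy class description.

```python
def best_action(ioa, actions, membership):
    """Virtually execute each action and keep the one that raises the IOA's
    class membership the most (the crisp hybrid-BDI decision rule)."""
    best, best_mu = "do nothing", membership(ioa)
    for name, apply_action in actions.items():
        mu = membership(apply_action(ioa))  # virtual execution, IOA unchanged
        if mu > best_mu:
            best, best_mu = name, mu
    return best, best_mu

# Toy IOA: membership to "building" peaks at an area of 100 pixels.
membership = lambda area: max(0.0, 1.0 - abs(area - 100) / 100)
actions = {"grow": lambda a: a + 10, "shrink": lambda a: a - 10,
           "smooth": lambda a: a}           # smoothing leaves the area as-is
print(best_action(70, actions, membership))  # -> ('grow', 0.8)
```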

Figure 5.

Segmentation and classification result before (left) and after (right) applying the agent-based optimization approach according to the ABIA framework. Numbers indicate the membership degree to “building” (Source: [38]).


3. A fuzzy belief-desire-intention model for agent-based image analysis

Based on the results already achieved with the relatively simple crisp BDI model, it has been investigated here whether the IOAs’ intentions could also be expressed in a fuzzy manner and whether this is advantageous compared to the BDI model described above. For this purpose, the existing rule set, which had been developed in a software environment dedicated to OBIA (here: the commercial software eCognition), was extended by the necessary components. In a standard fuzzy control system, control is given implicitly through the membership functions, e.g., “the colder the room temperature, the more open the heater’s valve.” In this particular case, the software used only allows fuzzy sets (alias classes) to be described in a fuzzy manner. Consequently, the agents’ acting opportunities had to be expressed as fuzzy sets, whereby the membership degree to an “action-class” can be interpreted as the “intention degree” or the willingness of an agent to perform that particular action. Another difference of the fuzzy BDI model developed here is that it only evaluates the current situation. That is, there is no virtual test for each potential action of if and how it would improve an agent’s situation (here: its class membership).
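For readers unfamiliar with fuzzy control, the heater example above translates into code roughly as follows. This is a minimal sketch; the membership function and its parameters (the freezing and comfortable temperatures) are arbitrary illustrative choices, not taken from the chapter.

```python
def mu_cold(temp_c, freezing=5.0, comfortable=21.0):
    """Membership of 'the room is cold': 1 at/below freezing,
    0 at/above a comfortable temperature, linear in between."""
    if temp_c <= freezing:
        return 1.0
    if temp_c >= comfortable:
        return 0.0
    return (comfortable - temp_c) / (comfortable - freezing)

# Implicit control law: "the colder the room, the more open the valve".
valve_opening = lambda temp_c: mu_cold(temp_c)  # 0 = closed, 1 = fully open

for t in (4, 13, 21):
    print(f"{t:2d} degC -> valve {valve_opening(t):.2f}")  # 1.00, 0.50, 0.00
```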

3.1. Components of the fuzzy BDI model

In order to fuzzify each IOA’s belief, its current status after segmentation and classification is analyzed. That is, each IOA had to be enabled not only to know its current class membership degree (degree of goal/desire achievement), but also, for each classification rule, the degree of that rule’s contribution to the classification result. The latter allows each IOA to select an action that improves one of those conditions.

3.1.1. Fuzzy beliefs

For this purpose, the conditions that constitute a “building” as described in the ontology were separated into three categories: color conditions, DSM conditions, and shape conditions. The class “roof”, which represents buildings, was consequently described through these aggregated conditions, whereby the property “Classification Value of …” expresses the membership degree μ for this particular class or the degree of fulfillment (DOF) of that particular condition. Similarly, the condition classes were further deconstructed into classes describing the DOF of property conditions (feature-2-class) or operator conditions (operator-2-class). Table 1 shows the cascaded classification scheme. With this decomposition, every IOA can now determine the grade to which each of the fuzzy classification rules or rule groups (color, shape, or DSM conditions) contributes to its final class assignment result (= grade of goal achievement). In the example displayed in Figure 6, the red-outlined IOA fulfills the criteria for “roof” only by 0.432: the color conditions are fully fulfilled, the shape conditions are fulfilled by 0.555, and the DSM conditions are fulfilled by 0.432. The shape conditions are not fully fulfilled because the IOA’s rectangular fit value of 0.76 leads to a grade of fulfillment of only 0.555 for that condition.

Table 1.

Cascaded fuzzy classification scheme for the class “roof” as described in Figure 4.

Figure 6.

Evaluation of classification conditions of an IOA.

Similarly, the IOA’s DSM conditions show a DOF of 0.432 because the mean difference to its neighbors in the DSM is 1.631, translating to a DOF of 0.432 for that feature.
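The cascaded evaluation can be retraced from the reported condition DOFs. The numbers above are consistent with the rule groups being combined by the minimum operator (a common fuzzy “and”), although the chapter does not name the aggregation operator explicitly; the following Python sketch therefore treats the minimum as an assumption.

```python
from functools import reduce

def dof_and(*memberships):
    """Fuzzy AND via the minimum operator (an assumed, common choice)."""
    return reduce(min, memberships)

# Condition DOFs of the red-outlined IOA as reported in the text:
color_conditions = 1.0    # fully fulfilled
shape_conditions = 0.555  # rectangular fit 0.76 -> DOF 0.555
dsm_conditions   = 0.432  # mean DSM difference 1.631 -> DOF 0.432

mu_roof = dof_and(color_conditions, shape_conditions, dsm_conditions)
print(f"membership to 'roof': {mu_roof:.3f}")  # -> 0.432, as in Figure 6
```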

3.1.2. Fuzzy intentions

Based on these fuzzy beliefs, an intended next action can be determined in a fuzzy manner using appropriate fuzzy decision rules, again expressed as fuzzy sets. In the present example, the following possible actions were implemented: “grow,” “shrink,” “smooth,” “merge,” and “do nothing.” While the latter action is obvious and applies only if an IOA has already achieved its goal, the former actions point to procedures eCognition offers and can easily be exchanged or adapted if necessary. In this particular example, the actions translate to the following (a raster sketch of them follows the list):

  • “grow”: grow the IOA of concern by one pixel into neighbor IOAs, which are unclassified.

  • “shrink”: shrink the IOA of concern by one pixel.

  • “merge”: merge the IOA of concern with its unclassified neighbors.

  • “smooth”: perform a grow-and-shrink sequence of one pixel each, starting with shrink (i.e., shrink first, then grow).
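In the chapter these actions are realized by eCognition procedures. As a rough raster analogue, ignoring the restriction to unclassified neighbors and omitting “merge,” the first three actions can be approximated with standard morphological operators, as in this illustrative Python sketch:

```python
import numpy as np
from scipy.ndimage import binary_dilation, binary_erosion

def grow(mask):    # "grow": expand the IOA by one pixel in every direction
    return binary_dilation(mask)

def shrink(mask):  # "shrink": contract the IOA by one pixel
    return binary_erosion(mask)

def smooth(mask):  # "smooth": shrink first, then grow (a morphological opening)
    return grow(shrink(mask))

ioa = np.zeros((7, 7), dtype=bool)
ioa[2:5, 2:5] = True      # a compact 3x3 IOA ...
ioa[1, 1] = True          # ... with a one-pixel protrusion
print(smooth(ioa)[1, 1])  # False: smoothing removed the protrusion
```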

The appropriate intentions are defined as the fuzzy sets “want grow,” “want shrink,” “want smooth,” “want merge,” and “do nothing.” The degree or intensity with which an agent wants to execute one of these actions can depend on the previously determined classification conditions, on the spatial situation in which the IOA is embedded, or on a combination of both. Each action intensity is expressed gradually, that is, through an action membership (Table 2). In the example given, an IOA’s intention to shrink is defined analogously to “want grow”: it wants to shrink the more, the closer its size is to the upper bound of the area rule for “building” (“Upper bound of Area”). An IOA wants to do nothing the more it already fulfills the “building” criteria. It wants to grow the more, the lower the contrast and elevation difference at its border are. Its intention to merge increases the smaller its area is and, simultaneously, the more similar its DSM and color criteria are to those of its neighbors. Its intention to smooth its border increases the less the shape criteria for “building” are fulfilled.

Table 2.

Definition of intentions as fuzzy sets.

As displayed in Figure 7, the red-outlined IOA prefers to grow. However, similar to the ambiguity of objects’ class assignments, each IOA can have ambiguous intentions in terms of a favorite action, a second favorite, and so on. Further, as with fuzzy class assignments [41], a minimum intention value should be defined (sensibly not less than 0.5) below which an intended action must be regarded as not clearly enough wanted and thus not considered further.

Figure 7.

Grades of intentions for actions an IOA wants to perform for its improvement.

In the example given in Figure 7, although the IOA’s most wanted action is to grow, this action is not the only one it intends to perform. Since the intention value for “want grow” is clearly below 1.0, the IOA does not seem to be fully convinced that this action will achieve its goal. Merging (intention = 0.68) seems to be an option, although it is only second-best; in other words, the IOA would also be satisfied if it merged. The only thing that is clear is that the IOA does not want to shrink (intention = 0.0). Its willingness to smooth its border (intention = 0.0298) is even lower than that for doing nothing (intention = 0.356).
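Applying the minimum intention value of 0.5 to these figures leaves exactly a favorite and a second-best action. A small Python sketch of this thresholding and ranking step follows; the “grow” value is a hypothetical placeholder, since the text only states that growing is the most wanted action, while the remaining values are those quoted above.

```python
MIN_INTENTION = 0.5  # below this, an action is not clearly enough wanted

# Intention degrees of the red-outlined IOA; "grow" is an assumed placeholder.
intentions = {"grow": 0.80, "merge": 0.68, "do nothing": 0.356,
              "smooth": 0.0298, "shrink": 0.0}

candidates = sorted(((action, mu) for action, mu in intentions.items()
                     if mu >= MIN_INTENTION),
                    key=lambda item: item[1], reverse=True)
print(candidates)  # [('grow', 0.8), ('merge', 0.68)]: favorite, second-best
```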


4. Conclusion

The amount of remote sensing data stored in archives is increasing continuously. Against the background of an increasing demand for reliable, precise, and timely geoinformation, searching these archives and analyzing the image data stored in them call for reliable and automated methods of image analysis. While OBIA is an accepted and highly accurate method for analyzing especially VHR remote sensing data, its robustness, transferability, and degree of automation are still low. A major obstacle to automating OBIA and its methods is its sensitivity to perturbations, that is, the variability of images and objects. ABIA, as an extension of the OBIA paradigm, has the potential to overcome these obstacles, since it has the ability to react more flexibly and robustly to unforeseeable perturbations. Nevertheless, research in this field is still in its beginnings.

The example demonstrated in this chapter is just one aspect of this wide research field. It has been demonstrated how a fuzzy BDI model can be implemented in standard OBIA software, allowing individual IOAs to control their improvement actions within a MAS. Further research needs to be done on learning mechanisms for individual agents as well as on improved individual decision rules. Last but not least, performance is another aspect that still needs to be investigated in this field.


5. Outlook

The implemented fuzzy BDI model acts as the basis for a negotiation model that can be applied to ABIA agents: based on their individual action priorities, IOAs can negotiate their common next action(s) and thereby optimize the overall classification result across multiple but different images of the same kind, as is typical in remote sensing. While the fuzzy BDI model has been implemented for IOAs, it is in principle also applicable to rule set adaptation agents (RSAAs).

References

  1. U. C. Benz, P. Hofmann, G. Willhauck, I. Lingenfelder, and M. Heynen, “Multi-resolution, object-oriented fuzzy analysis of remote sensing data for GIS-ready information,” ISPRS Journal of Photogrammetry and Remote Sensing, vol. 58, pp. 239-258, 2004.
  2. T. Blaschke, M. Kelly, P. Hofmann, et al., “Geographic object-based image analysis – towards a new paradigm,” ISPRS Journal of Photogrammetry and Remote Sensing, vol. 87, pp. 180-191, 2014.
  3. T. Blaschke and J. Strobl, “What’s wrong with pixels? Some recent developments interfacing remote sensing and GIS,” GeoBIT/GIS, vol. 6, pp. 12-17, 2001.
  4. P. Hofmann, “Detecting informal settlements from IKONOS image data using methods of object oriented image analysis – an example from Cape Town (South Africa),” in C. Jürgens (Ed.), Proceedings of the 2nd International Symposium on Remote Sensing of Urban Areas, Regensburg, Germany, June 22-23, 2001, pp. 41-42, 2001.
  5. M. Belgiu, B. Hofer, and P. Hofmann, “Coupling formalized knowledge bases with object-based image analysis,” Remote Sensing Letters, vol. 5, pp. 530-538, 2014.
  6. M. Belgiu and J. Thomas, “Ontology based interpretation of very high resolution imageries – grounding ontologies on visual interpretation keys,” AGILE 2013, Leuven, pp. 14-17, 2013.
  7. P. Hofmann, “Übertragbarkeit von Methoden und Verfahren in der objektorientierten Bildanalyse – das Beispiel informelle Siedlungen,” Dissertation, University of Salzburg, 2005.
  8. P. Hofmann, T. Blaschke, and J. Strobl, “Quantifying the robustness of fuzzy rule sets in object-based image analysis,” International Journal of Remote Sensing, vol. 32, pp. 7359-7381, 2011.
  9. A. Koch and D. Carson, “Spatial, temporal and social scaling in sparsely populated areas – geospatial mapping and simulation techniques to investigate social diversity,” in Proceedings of the Geoinformatics Forum Salzburg, Offenbach, 2012, pp. 44-53.
  10. H. R. Gimblett, Integrating Geographic Information Systems and Agent-Based Modeling Techniques for Simulating Social and Ecological Processes, Oxford University Press, Oxford, 2002.
  11. P. Göhner, Agentensysteme in der Automatisierungstechnik, Springer-Verlag, Heidelberg, 2013.
  12. W. Shen, D. H. Norrie, and J.-P. Barthès, Multi-Agent Systems for Concurrent Intelligent Design and Manufacturing, Taylor & Francis, London, New York, 2005.
  13. M. Wooldridge, An Introduction to Multiagent Systems, 2nd Ed., John Wiley & Sons, Chichester, U.K., 2009.
  14. M. Wooldridge, An Introduction to Multiagent Systems, John Wiley & Sons, 2009.
  15. M. Wooldridge and N. R. Jennings, “Intelligent agents: theory and practice,” The Knowledge Engineering Review, vol. 10, pp. 115-152, 1995.
  16. M. Wooldridge, “Agent-based software engineering,” IEE Proceedings - Software Engineering, vol. 144, pp. 26-37, 1997.
  17. M. Georgeff, B. Pell, M. Pollack, M. Tambe, and M. Wooldridge, “The belief-desire-intention model of agency,” in International Workshop on Agent Theories, Architectures, and Languages, 1998, pp. 1-10.
  18. A. S. Rao and M. P. Georgeff, “BDI agents: from theory to practice,” in Proceedings of the First International Conference on Multiagent Systems, June 12-14, 1995, San Francisco, California, USA, The MIT Press, 1995, pp. 312-319.
  19. M. Bratman, Intention, Plans, and Practical Reason, Center for the Study of Language and Information, 1987.
  20. K. Sycara and M. Paolucci, “Ontologies in agent architectures,” in S. Staab and R. Studer (Eds.), Handbook on Ontologies, Springer, Berlin, Heidelberg, 2004, pp. 343-363.
  21. A. Casali, L. Godo, and C. Sierra, “A graded BDI agent model to represent and reason about preferences,” Artificial Intelligence, vol. 175, pp. 1468-1478, 2011.
  22. S. Shen, G. M. O’Hare, and R. Collier, “Decision-making of BDI agents, a fuzzy approach,” in Proceedings of the Fourth International Conference on Computer and Information Technology (CIT’04), September 14-16, 2004, pp. 1022-1027.
  23. M. H. F. Zarandi and P. Ahmadpour, “Fuzzy agent-based expert system for steel making process,” Expert Systems with Applications, vol. 36, pp. 9539-9547, 2009.
  24. J. Beyerer, M. Heizmann, J. Sander, and I. Gheta, “Bayesian methods for image fusion,” in T. Stathaki (Ed.), Image Fusion: Algorithms and Applications, Elsevier, Amsterdam, pp. 157-192, 2008.
  25. C. Elmas and Y. Sönmez, “A data fusion framework with novel hybrid algorithm for multi-agent decision support system for forest fire,” Expert Systems with Applications, vol. 38, pp. 9225-9236, 2011.
  26. F. Castanedo, J. García, M. A. Patricio, and J. M. Molina, “A multi-agent architecture based on the BDI model for data fusion in visual sensor networks,” Journal of Intelligent & Robotic Systems, vol. 62, pp. 299-328, 2011.
  27. E. G. P. Bovenkamp, J. Dijkstra, J. G. Bosch, and J. H. C. Reiber, “Multi-agent segmentation of IVUS images,” Pattern Recognition, vol. 37, pp. 647-663, 2004.
  28. N. Richard, M. Dojat, and C. Garbay, “Automated segmentation of human brain MR images using a multi-agent approach,” Artificial Intelligence in Medicine, vol. 30, pp. 153-176, 2004.
  29. V. Rodin, A. Benzinou, A. Guillaud, P. Ballet, F. Harrouet, J. Tisseau, et al., “An immune oriented multi-agent system for biological image processing,” Pattern Recognition, vol. 37, pp. 631-645, 2004.
  30. K. E. Melkemi, M. Batouche, and S. Foufou, “A multiagent system approach for image segmentation using genetic algorithms and extremal optimization heuristics,” Pattern Recognition Letters, vol. 27, pp. 1230-1238, 2006.
  31. J. Nolan, A. Sood, and R. Simon, “Agent-based, collaborative image processing in a distributed environment,” in Proceedings of the 5th International Conference on Autonomous Agents (ACM Agents 2001), Montreal, QC, Canada, May 28 - June 1, 2001, pp. 228-235.
  32. F. Samadzadegan, F. T. Mahmoudi, and T. Schenk, “An agent-based method for automatic building recognition from lidar data,” Canadian Journal of Remote Sensing, vol. 36, pp. 211-223, 2010.
  33. K. Borna, A. Moore, and P. Sirguey, “A vector agent approach to extract the boundaries of real-world phenomena from satellite images,” in Proceedings of Research at Locate’14 (R@Loc-2014), Canberra, Australia, 2014.
  34. K. Borna, A. Moore, and P. Sirguey, “Towards a vector agent modelling approach for remote sensing image classification,” Journal of Spatial Science, vol. 59, pp. 283-296, 2014.
  35. F. Tabib Mahmoudi, F. Samadzadegan, and P. Reinartz, “Object oriented image analysis based on multi-agent recognition system,” Computers & Geosciences, vol. 54, pp. 219-230, 2013.
  36. J. Liu and Y. Y. Tang, “Adaptive image segmentation with distributed behavior-based agents,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 21, pp. 544-551, 1999.
  37. R. D. Labati, V. Piuri, and F. Scotti, “Agent-based image iris segmentation and multiple views boundary refining,” in Proceedings of the IEEE 3rd International Conference on Biometrics: Theory, Applications, and Systems, September 28-30, 2009, Washington, DC, pp. 1-7.
  38. P. Hofmann, P. Lettmayer, T. Blaschke, M. Belgiu, S. Wegenkittl, R. Graf, et al., “Towards a framework for agent-based image analysis of remote-sensing data,” International Journal of Image and Data Fusion, vol. 6, pp. 115-137, 2015.
  39. C. M. Macal and M. J. North, “Tutorial on agent-based modelling and simulation,” Journal of Simulation, vol. 4, pp. 151-162, 2010.
  40. P. Hofmann, V. Andrejchenko, P. Lettmayer, M. Schmitzberger, M. Gruber, I. Ozan, et al., “Agent based image analysis (ABIA) – preliminary research results from an implemented framework,” in GEOBIA 2016: Solutions and Synergies, September 14-16, 2016, University of Twente, Faculty of Geo-Information and Earth Observation (ITC), 2016.
  41. P. Hofmann, “Defuzzification strategies for fuzzy classifications of remote sensing data,” Remote Sensing, vol. 8(6), doi:10.3390/rs8060467, 2016.
