Open access peer-reviewed chapter

Human Behavior Modeling: The Necessity of Narrative

Written By

Roger Parker

Submitted: 18 April 2019 Reviewed: 06 May 2019 Published: 07 June 2019

DOI: 10.5772/intechopen.86686

From the Edited Volume

Computer Architecture in Industrial, Biomechanical and Biomedical Engineering

Edited by Lulu Wang and Liandong Yu


Abstract

As progress is made in the development of artificially intelligent mechanisms to assist human research into aspects of industrial, biomechanical and biomedical engineering, the conceptualization of the mental behavior of human entities becomes more vital and more central to the success of any interaction between machines and humans. This discussion explores one of the most important features of human behavior: the fundamental and irreversible concept of narrative. The narrative is the essential construct for the theoretical understanding and presentation of human communication, including formal and informal logic, emotional wonder and desperation, noble and selfish biases, nationalist and globalist politics, and any form of spiritualism. This presentation offers a working definition of human narrative and proposes the basic structure that must be represented by any computer system required to deal with human behavior.

Keywords

  • human behavior modeling
  • narrative definition and structure
  • computer-human communications
  • logic modeling
  • bias modeling
  • emotional modeling
  • human agent-based modeling

1. Introduction

Artificial intelligence has made significant gains in operational contexts in the last two decades. One area of success has been the widespread implementation of agent-based modeling (ABM). As ABMs have been used extensively in a variety of contexts, there has been increased interest in implementations whose agents represent the behavior of individual human beings, or groups of them, in a social context. This leads to the application of the irreversible concept of narrative [1]. The narrative is a fundamental concept for the theoretical understanding and representation of virtually any kind of human behavior. Our discussion here presents an abstract, yet working, definition of the narrative. A basic structure that must be incorporated into any computer system required to deal with human behavior is offered, and the implications of that structure in a variety of human contexts are examined.

This approach to agent-based modeling has been in development for nearly two decades now. The first software system in which a narrative became an essential component was created by AirMarkets Corporation, a collaboration of individuals from the air travel industry. It simulates the behavior of individuals and groups that have elected to fly from one city to another, including when they are going to fly, how much each is going to pay, and how likely each is to choose any particular flight option [2].

The work supporting the initial formulation resulted from extensive research by the author and his associates, culminating in the writing and publication of a doctoral thesis under the tutelage of global experts at the University of Technology in Sydney, Australia [3]. At that time artificial intelligence was still in its infancy; the development of pattern recognition, and the emergence of artificial intelligence technology based on such capability, has since argued for the consideration of human-replacement computer systems. But artificial intelligence technology has a great deal of work to do before it can come close to replacing ordinary humans, or even a broad class of animal life, in any kind of depth. This presentation gives a substantial indication of how far we have to go.

The next section of the discussion describes the narrative construct. The notions of event, time, and associated probability are essential concepts. We then connect these narrative ideas to concepts used routinely in mathematical and computer modeling and simulation. These concepts are exemplified by a range of rational choice models, common in artificial intelligence development, but viewed from the perspective of human behavior modeling. The approach is illustrated briefly in the section that follows, which describes in overview form the AirMarkets Simulation, a computer program that represents the booking of over 40 million travelers on the world’s commercial airline system using an agent-based, narrative-structured methodology. We then move to a survey of heuristic choice model concepts, followed by an exploration of even more human, and less computer-like, narrative structures, including social network protocols and sources of bias in human narrative execution.

We close with a review of needed future work, including mention of several mathematical theorems that limit the applicability of computer-based narrative software, and the distinctions between human and computer-based analysis that such theorems imply.


2. A formal notion of narrative

The objective of this research is to develop the ability of computing machines to represent behavior in an agent-based software architectural context, so that we can build agent-based models of activity such as market behavior, political contexts, or social interaction. Our first consideration is the creation of a framework for describing how human behavior can be characterized in a fashion consistent with observed patterns, while dynamic enough to describe actions and activities that exist, at least at some point in time, only in the imagination of the individual. Time is an essential feature of such a characterization, since all behavior of virtually all mobile biological entities, whether selected, observed, enacted, realized, or merely cited as existing, occurs over some distinct and evident time period. In addition, we want the framework to differentiate between what happens within the creature, such as choices based on memory or mental understanding, and what happens external to the agent, such as weather and the behavior of other agents. That is, the construct must describe the relationship between what an agent perceives the world to be and what the world actually is.

Virtually all animal organisms show evidence of memory-based pattern creation and maintenance. In his engaging discussion, Montague [4] makes the case that virtually all biological entities that can move about have some ability to forecast at least the immediate future, and therefore must have at least a basic mental image of what the effect of their future activity is likely to be. The creature compares the perceived state of its current environment with its internal understanding of it and uses a cause-and-effect chain to determine what it expects the future state of the environment will be if it undertakes some appropriate activity. The phrase cause-and-effect refers to an internal conception by the creature that the state of affairs it currently perceives is due, in large part, to relationships between things that occurred in the past and the current state, and to the anticipated future state of affairs that will exist if the creature does some appropriate thing. It is this internal, mental construct that we call a narrative. Indeed, for the artificial intelligence purposes of this discussion, the definition of narrative that applies to the modeling of human agents is as follows: The term ‘narrative’ will be taken to mean a pattern maintained internally by an agent that represents temporal cause-and-effect chains which define events that are perceived by the agent, with which the agent ‘understands’ the events, and based on which the agent takes action. Since narratives are temporal cause-and-effect chains maintained in a mental structure, they are essentially independent of the reality they represent.

Narratives are favorite topics of people who are interested in human creative activity, so much of the insight into their nature comes from authors, poets, playwrights and composers. The term narratology is used to identify such discussions. Danto [5] defines the term atomic narrative to be a single event with related past and future. Figure 1 is an illustration of the concept. Note that there is past and future narrative time, and the event has a history and a range of outcomes.

Figure 1.

Schematic conceptualization of an atomic narrative.

To the left of the event is its history, the state space that obtained before the event occurs. To the right is an arc representing the set of possible outcomes of the event. An atomic narrative can have a set of discrete outcomes, a continuous range of outcomes, as shown in Figure 2, or, at least theoretically, a mixture of the two. The history here, of course, is history in narrative time, and compared with history in real time, it is woefully incomplete. That is, there must exist states in real-time history that are not in a narrative history, no matter how thorough the recording of that history might be. The actual content of reality is never completely known to humans, because we are unable to completely absorb and understand a description of reality in a finite interval of time.

Figure 2.

Discrete and continuous outcome sets.

That there can be more than one possible outcome is a vital property of the event. An event occurs when something in reality changes because the agent recognizing and responding to the event promulgates the change. The change is actually realized as an event outcome. There is a family of probability distributions associated with the outcome set of every event. This family is a stochastic process indexed by a set of narrative variables referred to as resources, and is thus called the event stochastic process. The stochastic process can be denoted PΛ(Y|X), the probability that outcome Y occurs given history X and resource allocation Λ. A molecular narrative is simply a sequence (in time) of atomic narratives, and a narrative can be made up of multiple molecular narratives that are simultaneous in narrative time. A narrative that contains such multiple molecular narratives is called compound and is illustrated in Figure 3. As real time transpires, the course of the narrative is realized, which is illustrated in Figure 3 by the solid line connecting the relevant events.

Figure 3.

Molecular narratives.
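
The event stochastic process PΛ(Y|X) and the chaining of atomic narratives into a molecular narrative can be sketched in code. This is a minimal illustration only; the class and function names are hypothetical, and the weight function standing in for the outcome distribution is an assumption made for the sketch.

```python
import random

class AtomicNarrative:
    """A single event with an outcome set and an event stochastic
    process P_Lambda(Y | X): the probability of outcome Y given the
    narrative history X and the resource allocation Lambda."""

    def __init__(self, outcomes, weight_fn):
        self.outcomes = outcomes    # discrete outcome set
        self.weight_fn = weight_fn  # weight_fn(y, history, resources) >= 0

    def distribution(self, history, resources):
        """Return P_Lambda(. | history) as a dict over the outcome set."""
        weights = {y: self.weight_fn(y, history, resources) for y in self.outcomes}
        total = sum(weights.values())
        return {y: w / total for y, w in weights.items()}

    def realize(self, history, resources, rng=random):
        """Sample one outcome from the event stochastic process."""
        r, acc = rng.random(), 0.0
        for y, p in self.distribution(history, resources).items():
            if p == 0.0:
                continue  # impossible outcomes never fire
            acc += p
            if r <= acc:
                return y
        return y  # guard against floating-point rounding

def run_molecular(narrative_seq, resources, rng=random):
    """A molecular narrative: realize a time-ordered sequence of atomic
    narratives, each conditioned on the outcomes realized so far."""
    history = []
    for event in narrative_seq:
        history.append(event.realize(history, resources, rng))
    return history
```

Because each event's weight function receives the full history, the distribution of a later atomic narrative need not be independent of earlier outcomes, as the text requires.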

Thus, narratives are memory structures that are retrieved under a currently perceived state configuration. The retrieval is not only representative of the present, but also reflects the relevant past and future. The future portion of the narrative is created from the expectations induced by the probabilities associated with the narrative. If one outcome results, the narrative will take one course, while if a different one results, it will follow a different path. The narrative carries with it a description of the expected consequences of these event outcomes. Thus, the description of the future consists of a set of branches, each opening a new course for the narrative to take. Furthermore, there is a probability distribution associated with each potential outcome which dictates the probability that the branch initiated by that outcome will be realized.

It should be noted that the stochastic structure of the probabilities cannot be assumed to be simple. The likelihood of outcome A vs. outcome B may depend on outcomes that were the result of previous events or sets of previous events. More succinctly, every atomic narrative has a probability distribution associated with its outcomes, but the distribution associated with the outcome of the last atomic narrative in a molecular narrative is not independent of the other atomic narratives which make up the molecular narrative. Therefore, every narrative has a past which contains the information required to support the future outcome stochastic processes.

But the narrative past is not a completely accurate description of what led to the present and is conditioning the future. It has inaccuracies caused by imperfect recall and incomplete data. Memory deteriorates as events recede into the past, so there must be a probability associated with the accuracy of the recall, and hence the past of a narrative also has an associated stochastic process structure. This is illustrated in Figure 4. The past and the future are shown as the widening ends of a hyperbolic cone representing the increasing uncertainty surrounding the past and future of the narrative, and the present is the narrowest part of the hyperbolic surface. The small circles connected by the lines inside the hyperbolic solid represent the events of the narrative and their relationships, as perceived at that point in the narrative. These event conditions change from past to future, but the degree of uncertainty in the narrative increases as the past recedes and the looming distant future is contemplated. This results in a loss of resolution of the event description.

Figure 4.

The time structure of narratives.

The individual that creates and maintains a narrative is called its owner. All narratives have owners, and all narratives are unique, to a greater or lesser extent, to their owners. But narratives are also shared. Language and communication are important adaptive tools for the human species, and a vital purpose served by language is the sharing of narratives. Indeed, some suggest [6] that it is the sole purpose of language, since all communication, in this view, is the description of a narrative. Shared narratives are not, however, perfect copies held by each individual that is part of the sharing group. Each individual owner at least modifies the shared narrative structure to fit their own unique set of compound narratives that include the shared one. Moreover, shared narratives become institutionalized into laws, codes of conduct, norms, and other social constructions that serve to assist in group adaptation and evolution. Shared narratives are a very powerful social and evolutionary tool. In fact, it could be argued that the interconnections of the atomic narratives that make up shared compound narratives may set the framework for the multilevel selection adaptation evidenced by group evolution. While beyond the scope of this discussion, future research in this area using computer-based agents designed with the narrative construct might yield very useful results. It also should be noted that modes of communication other than formal speech are available both to humans and to other species. Indeed, the question arises as to whether some form of communication is available even to plants.

It is reasonable to conclude that narratives are mechanisms that result from human evolution. Much like the arguments presented by Sober and Wilson [7] and Shermer [8], the narrative hypothesis is reinforced by its evolutionary feasibility and by the proven success of narratives as an adaptive device in support of both individual and group survival. This leads to the inference that the outcome of a narrative is either desirable or undesirable. Some narratives are very simple, like what to have for lunch. At the other extreme, compound narratives that support religious beliefs and institutions can be invoked to portray a future where death is merely a change in physical state from this world to another, which may be Heaven (desirable), Purgatory (not so well favored) or Hell (clearly undesirable). Given this property of desirability, narratives can be considered the vehicle by which value is expressed by the individual, and it is from narratives that values arise as identifiable attributes of human behavior.

The narrative framework can be considered a storage device not only for memories, but also for expectations. The role of expectation in narrative structure, the projection of the effect associated with some cause in the future, is quite clear. Maintaining a set of expectations associated with the various outcomes of a particular event-based choice situation requires some form of mental storage. It would imply too much inherent human analytic ability to assert that each such expectation set has a formal probability distribution also maintained as part of its mental representation, especially since such distributions would depend on the pathway by which the event itself arose, and not only on the fact that the event came about at all. Nonetheless, narratives provide both a context and a framework within which expectations can be stored, recalled and manipulated, and for the purposes of modeling human agents, this expectation-storage property is important. Indeed, one of the most useful applications of an agent model might be that both the “correct (logical)” application of probabilities and the “deficient (logically false but commonly held)” expectation distributions can be modeled, and the results as expressed in specific choice contexts compared.

A human individual is part of a world and is in constant interaction with it. The agent intervenes in the real world by allocating resources to the event at hand in order to alter its perceived probability distribution of the outcomes, either toward those it favors or away from those it dislikes. In this way, the values held in the narrative are expressed in action. Since compound narratives are molecular, and thus made up of sequences of atomic narratives, any attempt by an agent to affect what it thinks will be the desired course of a narrative involves choosing a specific outcome from the set of outcomes of the narrative representation of that event. And it must do so one event at a time; otherwise contradictions, uncontrolled feedback, process deadlock or other dangerous anomalies would emerge. Thus, in the narrative framework, the relationship between the narrative owner and its environment is always an event, and therefore always a choice problem. There are a number of other issues of interest that lie beyond the scope of this discussion. What about the time it takes to select outcomes? What happens during that selection process? What about the cost of being wrong? And how does the narrative accommodate a shifting environment?
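
The intervention step just described, choosing a resource allocation that shifts the perceived outcome distribution toward favored outcomes, can be sketched as a simple expected-value comparison. This is a hypothetical illustration; the `distribution` and `value` callables are assumptions standing in for the agent's narrative content.

```python
# Sketch of the agent's intervention: pick, from a finite set of
# candidate resource allocations, the one whose induced outcome
# distribution maximizes the expected desirability of the outcome.

def choose_allocation(candidates, distribution, value):
    """candidates: iterable of resource allocations (Lambda values).
    distribution(lam): dict mapping each outcome to its probability
    under allocation lam.
    value(outcome): desirability of the outcome to the narrative owner."""
    def expected_value(lam):
        return sum(p * value(y) for y, p in distribution(lam).items())
    return max(candidates, key=expected_value)
```

Note that the choice is made for one event at a time, consistent with the sequential, event-by-event intervention the text requires.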

Notice there is absolutely no requirement that any of the events or sequences of events described by a narrative be true, in the sense of Karl Popper’s intersubjective verifiability of narrative content [9]. Indeed, everyone has experienced erratic and unpredictable behavior on the part of others. Such behavior is seen because the narrative driving the erratic individual is different in some important way from what the observer expects, given her own constellation of narratives. Since narratives exist as neurological entities (fundamentally memories), study of their physical existence and attendant properties resides in the domain of the neuroscientist. But what is represented by the narrative pattern, and whether or not that has any reality outside of the narrative itself, is an entirely separate question. It can be as fanciful or factual as the owner wishes and is capable of managing. The fact that fanciful narratives can be as real as factual ones creates the conditions under which wonderful fiction can be portrayed in books, on television, and in the theatre. It also creates the capability for popular politicians and dangerous authoritarians to put forth absurd and violent behaviors as within human and normal perspectives. Simply review the history of political activity in Europe prior to World War II.

With respect to building a computer program, there is a minimal organization required for the implementation of a narrative in the context of agent-based modeling. The general definition of agent has been presented in Parker and Bakken [10] and in Parker and Perroud [11]. For our purposes, agents will always be implemented in the form of computer programs. Within the agent are mechanisms for perceiving the environment, making choices about what to do given the environmental information and from the context of the agent’s motivating narrative, and taking actions that advance the agent’s narrative. This simple agent structure consists of four components and is illustrated in Figure 5.

Figure 5.

The general structure of an agent.

Corresponding to the memory of a human being, the state vector is an array of variables which describe the current state of the agent. Referring to this array as a vector does not imply that it is necessarily a precise set of real numbers laid out in a row or column manner. Structures more complex than simple numbers can be specified. However, it does imply that the memory object is both precisely well-defined and finite. The state vector also encodes the appropriate narrative structures that represent the beliefs and aspirations of the agent being considered. For example, it contains the probability distributions of the outcome sets relevant to the events the agent will encounter during the simulation. The state vector is maintained as an appropriately defined data structure.

Like the ability of humans to receive, filter and understand information, a computer agent can receive information about the current conditions in its environment through the code component called the perceptor. Almost without exception, perceptors are message-handling routines the purpose of which is to ‘observe’ the current state of the agent’s environment, and filter and translate that information into a form that can be used by the choice-making component of the agent, converting the messages from the environment into an internal form of use to the agent. This internal translation is unique to each agent, which thus allows for agents that interpret the same external message differently. This would be important if different agents have different narrative events that were triggered by the same external environmental conditions.

The responses of the agent to external messages, whether those put into the environment as output messages by other agents, those requiring changes in the internal state vector of the agent, or those requiring the attention of other agents, are managed by the ratiocinator component. The ratiocinator makes choices and adapts the behavior of the agent. This is the part of the agent which replicates how the human intersects with the simulated world in accordance with the then-active narrative structure. How such stochastic choice mechanisms are implemented is within the purview of the agent’s ratiocinator.

When it is required that an agent create input to other components of the agent-based simulation, it can issue messages out to the environment by way of what we call an actor. It specifically engages the agent environment. Messages intended for other agents are detected by those agents as they interact with the environment.

For even the simplest case of the atomic narrative with a single event, the agent components must contain significant data. The perceptor needs to be designed to recognize the occurrence of the event and the perceived state of the environment at the time of the occurrence. The ratiocinator must be programmed to perform one of a set of choice protocols the agent will apply to exercise the choice required by the event, including the availability and allocation conditions of the resources at the agent’s disposal. And the actor must have, in its repertoire of possible actions, those that are suitable given the event perception and protocol requirements. If the narrative construct driving the event recognition and intervention is a molecular narrative, with a number of events connected in a time-ordered and contingency network, then each atomic narrative must be delineated as described above. Furthermore, the connections between the component events must also be precisely represented in order to reliably represent the agent’s actions.

This definition of agent contains the basic outline of all the pieces of the programmer’s art necessary to build and execute a virtual market simulation. At first glance, these requirements may seem onerous. But in cases where the construct has been applied, the problem reduces itself to a tractable, if perhaps complex, computer algorithm programming task. As with all computing programs, two elements are present: the data on which the algorithms operate and the computing code which executes the algorithms themselves. Looking at the agent definition from that perspective, the state vector holds the data, and the perceptor, ratiocinator and actor consist of algorithms. From experience to date, and from reflection on how a particular agent behavior might be implemented in a variety of other contexts, the programming of the choice protocol set that resides in the ratiocinator seems the most daunting.
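
As a concrete, if highly simplified, sketch of the four-component structure just described, the following hypothetical class wires a state vector, perceptor, ratiocinator and actor together. The dict-based message format and the trigger-based choice rule are illustrative assumptions, not features of any particular implementation such as the AirMarkets system.

```python
class Agent:
    def __init__(self, state):
        # The state vector: the agent's memory, beliefs and narrative data.
        self.state = state

    def perceptor(self, message):
        """Filter and translate an environment message into the agent's
        internal form; agents with different narratives can thereby
        interpret the same external message differently."""
        return {k: v for k, v in message.items() if k in self.state['known_keys']}

    def ratiocinator(self, percept):
        """Choose an action from the context of the currently active
        narrative held in the state vector."""
        for condition, action in self.state['narrative'].items():
            if percept.get(condition):
                return action
        return None  # no narrative event was triggered

    def actor(self, action):
        """Emit an output message into the environment."""
        if action is None:
            return None
        return {'from': self.state['name'], 'action': action}

    def step(self, message):
        # One perceive-choose-act cycle.
        return self.actor(self.ratiocinator(self.perceptor(message)))
```

In a full simulation many such agents would exchange these messages through a shared environment, with each agent's perceptor deciding which messages are relevant to its own narrative.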

Why do we care in the least about narratives? Because they represent, indeed are the existential structure of, what we humans refer to as models. Now let us look at that!


3. Modeling and simulation

Human beings think in terms of mental models of the world around them and their relationship to it [12]. We formulate concepts and ideas and link them together to represent the way we think the world works in some significant regard and use those representations to make decisions on our future actions. These models can range from simple statements of assumed cause and effect—“if I step out in front of a moving bus, there’s a good chance I could be seriously hurt”—through physical scale models of buildings or vehicles in their design stages through mathematical representations of complex social or physical systems. Common to models of whatever composition or subject is that they are abstractions, and therefore simplifications, of reality, retaining what is believed to be salient features of the problem at hand.

We are only concerned with models that are represented mathematically, either in the form of one or more equations or as a computer program. Consider the simple “What if?” scenario analysis applied to the results from a conjoint analysis, or more elaborate “war games” in which competing teams of managers (or MBA students) develop and implement strategies in an interactive fashion. An underlying principle of modeling is the representation of one process or set of processes with a simpler process or set of processes. The Monopoly® board game, for example, reduces a complex economic system into a limited set of transactions. Part of the fun of playing the game derives from the degree to which the outcomes (wealth accumulation and bankruptcy) resemble the outcomes of the real-world process that is, in a sense, being simulated.

An important concept regarding simulation is time. This temporal property is an essential characteristic of process-representative simulation and differentiates the application of simulation as an analytic and scientific tool from many other approaches, such as deductive logic or statistical inference. As noted in the previous section, the central role of time is also an integral part of agent-based model simulations. With an express representation of time, the dynamics of a system can be explicitly studied.

In many circumstances, the phenomenon under investigation cannot be ethically or safely subjected to experimentation. The study of disease epidemics, social intolerance and military tactics are obvious examples. In other situations the scale of the process under study prohibits any other approach. In astronomy and astrophysics, the universe is not available for experimental manipulation, but a simulation of important aspects of it is available for such study. Similarly, explorations of cultural development or species evolution cannot be executed within a physical laboratory environment, while a simulation permits hypothesis testing and inference on a reasonable time scale. Finally, some systems are so complex that traditional experimental science seems hopeless as a research approach. Among these systems are ecological dynamics and evolutionary economics.

Figure 6 lays out the “ethnology” of mathematical modeling in general and simulation modeling in particular, starting with the invention of the calculus in the 1700s and continuing up to the present day. (This diagram is adapted from Gilbert and Troitzsch [13].) The two broad categories of stochastic and deterministic simulation models are indicated by the shaded ovals. The bold face labels define the mathematical contexts of the various modeling formalizations, while their genealogy is spelled out with the lines. As illustrated in this diagram, agent-based simulation belongs to the class of stochastic simulations and descended from a form of simulation called discrete-event simulation. Discrete-event simulations are stochastic simulations that attempt to mimic the behavior of the discrete parts of a system and their interaction. As indicated in Figure 6, a number of other varieties of simulation have evolved from the discrete-event form.

Figure 6.

The ethnology of mathematical modeling and simulation.

Also contributing to the heritage of agent simulation are game theory, artificial intelligence and cellular automata models. Some of the early applications of agent-based models were in the area of rational game play. Axelrod’s [14] experiments with the prisoner’s dilemma game are the classic example and are cited by virtually every practitioner of agent modeling somewhere in their writings. Artificial intelligence models are a key component of the development of robotic systems (Wooldridge [15] is a chief proponent). If the behavior modeled by the discrete-event simulation contains stochastic elements, as it usually does, then the simulation can be run repeatedly using random numbers generated by the relevant probability distributions, and the distributional characteristics of the resulting dynamics portrayed. In fact, for a system of even moderate scope this is the only way such dynamics can be validly studied, since attempts to express them through ordinary or stochastic partial differential equations quickly exceed any reasonable closed form.
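
A discrete-event simulation of even a trivial system exhibits the pattern just described: a time-ordered event queue, stochastic event times, and repeated runs to study the distribution of results. The single-server queue below is a standard textbook example, sketched here under simplifying assumptions (Poisson arrivals, fixed service time); the function name and parameters are illustrative.

```python
import heapq
import random

def simulate(arrival_rate, service_time, horizon, rng):
    """Single-server queue: return the number of customers served
    before the time horizon expires."""
    # Event queue ordered by time; each entry is (time, kind).
    queue = [(rng.expovariate(arrival_rate), 'arrival')]
    waiting, busy_until, served = 0, 0.0, 0
    while queue:
        t, kind = heapq.heappop(queue)
        if t > horizon:
            break
        if kind == 'arrival':
            waiting += 1
            # Poisson arrivals: schedule the next one.
            heapq.heappush(queue, (t + rng.expovariate(arrival_rate), 'arrival'))
        # Start the next service if the server is free (this fires both
        # on an arrival to an idle server and on a departure).
        if waiting and t >= busy_until:
            waiting -= 1
            served += 1
            busy_until = t + service_time
            heapq.heappush(queue, (busy_until, 'departure'))
    return served
```

Running `simulate` many times with different random seeds yields the distribution of the served count, which is precisely the repeated-run study of stochastic dynamics described above.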

Mathematical models can be classified into two forms: reductive models and structural models. In a reductive model, a set of data is reduced to one or more equations or similar relationships that represent the data, but in a simpler, more parsimonious form. There is some loss of accuracy, the price of which is greater understandability. Linear regression is a ready example, where a set of pairs (or n-tuples) of data is replaced with an equation representing the data values of one element of the set, called the dependent variable, as a linear combination of the data values of the other (n−1) elements, called independent variables. Structural models, on the other hand, are intended to be representations that not only reproduce patterns of observed data, but also characterize the process, or structure, by which the variables represented by the data relate to one another. Reductive models are not required to replicate the process by which the observed data is generated. They need only reproduce the observed results in a more economical and parsimonious way. Structural models focus on the way in which the observed values come about.
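
As a concrete instance of a reductive model, ordinary least squares reduces a set of (x, y) pairs to the two coefficients of a line, trading some accuracy for a far more parsimonious description of the data. This minimal sketch uses the textbook closed-form solution:

```python
def fit_line(xs, ys):
    """Return (intercept, slope) of the least-squares line through
    the data: the reductive model replacing the raw (x, y) pairs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # Closed-form OLS: slope = cov(x, y) / var(x).
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return my - slope * mx, slope
```

The two returned numbers say nothing about the process that generated the data; that silence is exactly what distinguishes a reductive model from a structural one.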

A good deal of applied mathematics and a substantial part of modern statistics is aimed at easing the problem of the generation of reductive models. Beyond linear regression, there are literally thousands of other applied techniques; Fourier series [16], time series analysis [17], discrete choice modeling (e.g., [18, 19]), and proportional hazard models [20] to name only a very few. Indeed, the so-called Stone-Weierstrass theorem in classical abstract analysis [21] describes a set of general conditions under which an arbitrary data set can be approximated to any desired degree of accuracy by a broad class of simpler functional forms. And reductive models are powerful and extremely useful tools. In econometrics, for example, reductive models are widely used for forecasting [22, 23]. For a thorough treatment of the structures and formal aspects of mathematical models, see Chang and Keisler [24].

But in the “harder” sciences like physics, reductive models give way to structural models. Newton’s laws of motion, or Einstein’s theories of relativity, are mathematical constructs which purport to not only represent the results of data sets arising from observations of natural phenomena, but also how the objects in the natural world interact with each other to generate the observed data. That is, the models do not merely represent the observations, but also describe the process by which the observations come about. As such, they carry more explanatory weight than reductive descriptions. They are more likely to be valid beyond the range of the initial observed data sets that lead to their formulation. Also, they have repeatedly been shown to be robust across varied data sets and when connected to other models which represent related systems.

Simulation is one of the more powerful structural modeling techniques. A simulation is a model which, by design, represents the relationships between entities in a system. In particular, a simulation captures the dynamics of the relationships between components of a system, reflecting how changes in one component create changes and affect the response of other components. Humphreys [25], in fact, defines a simulation as a structural model that explicitly includes time as a dimension, so that the dynamics between the variables described in the model can be appropriately portrayed.1 The increasingly widespread use of simulation is also tied to the growth of computing power.

Note that the use of the word simulation here as a modeling technique should not be confused with the phrase as applied to certain methods of finding solutions to equations. For example, computing the volume of a complex solid in a multi-dimensional space, such as the area under a multi-dimensional normal probability distribution, can be done by generating a very large number of points in a region enclosing the solid and determining the proportion of those points that fall inside the solid.
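The Monte Carlo volume computation just described can be sketched in a few lines: sample points uniformly in an enclosing cube and scale the cube's volume by the fraction of points landing inside the solid. Here the solid is the unit ball in three dimensions, chosen because its exact volume (4π/3 ≈ 4.18879) is known for comparison.

```python
import random

# Monte Carlo estimate of the volume of the unit ball in 3 dimensions:
# sample points uniformly in the cube [-1, 1]^3 and multiply the cube's
# volume (8) by the fraction of points falling inside the ball.
def ball_volume_estimate(n, seed=0):
    rng = random.Random(seed)
    inside = 0
    for _ in range(n):
        x, y, z = (rng.uniform(-1, 1) for _ in range(3))
        if x * x + y * y + z * z <= 1.0:
            inside += 1
    return 8.0 * inside / n

est = ball_volume_estimate(200_000)
# the estimate lands near the exact value 4*pi/3
```

This is "simulation" in the equation-solving sense (artificial sampling), not simulation as a structural model of a system.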

Modeling and simulation which incorporate a definition of time (unidirectionality, non-repeatability, uncontrollability) are representations of narratives. Fisher incorporates the rationality of traditional logic into the narrative paradigm (with a somewhat critical slant):

“Narrative rationality is not simply an account of the ‘laws of thought,’ nor is it normative in the sense that one must reason according to prescribed rules of calculation or inference making. Traditional rationality prescribes the way people should think when they reason truly or toward certainty. … Traditional rationality is, therefore, a normative construct. Narrative rationality is, on the other hand, descriptive; it offers an account, an understanding, of any instance of human choice and action, including science.” [6].

How does this narrative construct relate to the modeling of human agents? As was stated earlier, the narrative provides the mechanism for separating reality from what the agent thinks is reality. That is, it defines the context, values and resources required to change the realization of a narrative to a desired outcome. Many consider individual narrative discovery the central problem of marketing research, as exemplified by the strong methodological presence of ethnology in some marketing research quarters, such as ESOMAR (a European society of marketing research). It is clear that the choice process is a vital aspect of any such description. In fact, the narrative framework provides an ontological justification for pursuing the study of choice as the critical component of replicating the behavior of humans in agent-based models. It is not necessary to know the full constellation of narratives maintained by an individual to incorporate the concept into agent models. It is often sufficient, at least as a starting point, to model a few essential atomic narratives.

Narratives are needed for the construction of agents in a computing context because the distinction between the reality of the environment that an agent finds itself in and the perception and interpretation of that environment by the agent must be kept clear. Formally speaking, all narratives are models.2 And, since a narrative is a sequence of events, there exists a finite index set {1, 2, 3, …, k}, each member i of which has an associated stochastic process Ei = PΛ(i)(Y|Xi). So a narrative can be expressed as the ordered k-tuple (E1, E2, E3, …, Ek), representing the possible outcomes of the narrative execution.
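A narrative expressed as an ordered tuple of stochastic events might be sketched as follows. The class names, event labels, and probabilities are all hypothetical illustrations, not part of the chapter's formalism; the point is only that each atomic event carries a distribution over outcomes and that "executing" the narrative samples one outcome per event in order.

```python
import random

# A hypothetical sketch of a narrative as an ordered tuple of stochastic
# events (E1, ..., Ek): each atomic event carries a probability
# distribution over its possible outcomes, and executing the narrative
# samples one outcome per event in sequence.
class AtomicEvent:
    def __init__(self, name, outcomes):
        # outcomes: dict mapping outcome label -> probability (sums to 1)
        self.name = name
        self.outcomes = outcomes

    def realize(self, rng):
        labels = list(self.outcomes)
        weights = [self.outcomes[k] for k in labels]
        return rng.choices(labels, weights=weights)[0]

def execute_narrative(events, seed=1):
    rng = random.Random(seed)
    return tuple(e.realize(rng) for e in events)  # one realization of (E1..Ek)

narrative = (
    AtomicEvent("apply", {"accepted": 0.6, "rejected": 0.4}),
    AtomicEvent("negotiate", {"agree": 0.7, "walk_away": 0.3}),
)
realization = execute_narrative(narrative)
# realization is a 2-tuple such as ("accepted", "agree")
```

Repeated executions with different seeds trace out the distribution of possible narrative outcomes, which is what separates the agent's narrative from any single realized history.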

Narratives are to a significant degree a product of evolutionary history. As we observe human society across multiple cultures we should repeatedly encounter common behavioral attributes, since the environment in which humans have evolved have many essential elements in common. Moreover, we can then assert a behavioral universality that would support hypotheses that could be tested across cultures. Finally, if such universality can be supported, then the construction of agents that replicate important behavioral characteristics can be expected to be applicable in a wide range of contexts.

A thorough cataloging of human behavior patterns in cultures around the world has been assembled by the anthropology community. The University of Illinois at Urbana-Champaign maintains the Human Relations Area Files (HRAF), an organized and indexed compilation of every reported ethnographic study of human culture (UIUC [26]). Many scholars have used this resource in recent years to undertake cross-cultural studies of human behavior. Brown [27] has compiled a list of patterns of behavior that have been recorded in every human society that has been studied—anywhere in the world, large or small, old or modern. Brown focused on the hypothesis that a number of features of human culture would be found in all human experience, regardless of time, place, or history. When originally published, his views were sharply opposed to the prevailing anthropological wisdom. At the heart of the controversy was the nature-nurture debate, which still circulates actively today. Brown spent considerable space refuting the concept of cultural relativism, which holds that human cultures are vastly different with limitless variety,3 that culture completely determines human behavior, and that therefore there can be no human universals. (For example, by taking a contrary view, Brown contradicted the great anthropologist Margaret Mead.4) This position is important for this analysis because, if there were no human traits independent of culture, then the problem of simulating human behavior with agents becomes extensively more difficult. In that case every culture would have a unique heritage and historical path, making generalization very difficult. Brown’s refutation of the relativistic view of human behavior is therefore valuable to the arguments justifying agent modeling in computer science. If we cannot characterize human behavior in some reasonably perspicuous and parsimonious way, the task of defining human simulation agents will be significantly more onerous.


4. Rational choice protocols

Narratives are also the mechanisms by which agents communicate with each other—so-called shared narratives—and understand the world around them. They need not, however, be true descriptions of the world. All narratives are compounded from sequences of atomic events (they are molecular narratives), and thus the choice process of the atomic narrative is the key focus. The set of choice protocols can be classified into four broad groups: (1) rational methods, including various concepts of bounded rationality; (2) heuristics, which are quick and easy (but often very inaccurate) rules-of-thumb; (3) social network protocols, which rely on communications between individuals to make choices; and (4) biases, which are significant errors often found in choice-making.

Consider what are usually referred to as protocols for rational choice. Perhaps the easiest of these would be rule-invocation methods. This is the situation where the agent making the choice has an available set of rules and, depending on the value of the state space when the event is encountered, one or more of these rules are used to determine the choice. Rule-based agents are widely used in agent-based models. Axelrod [14], Epstein [28], and Wooldridge [15] insist that rule-based agency is the wisest course of agent construction. This press for simplicity is in response to the need to explore and understand some of the unusual emergent results that are observed with agent-based models. A complicated agent structure makes analysis of such emergent structure much more arduous. And such a protocol is trivial to build into an agent: merely specify the action to be engaged for each appropriate set of state space variable values. But this is not a choice protocol, since the outcome of the choice is predetermined by the rule set. It is a pre-defined action invocation, and since there is no probability associated with the rule invocation, there can be no associated narrative event. Therefore, this kind of choice mechanism is not within the purview of agent-based models as defined here, which require such a stochastic mechanism.
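A rule-invocation "agent" of the kind just described can be sketched in a few lines; the rules and state variables are illustrative. Note that the same state always yields the same action, which is exactly why, as the text argues, this is a pre-defined action invocation rather than a stochastic choice protocol.

```python
# A sketch of rule invocation: the agent's action is fully determined by
# the state-space values, so there is no probabilistic choice involved.
# The rule set and state variables are purely illustrative.
def rule_based_action(state):
    rules = [
        (lambda s: s["inventory"] == 0,           "reorder"),
        (lambda s: s["demand"] > s["inventory"],  "ration"),
        (lambda s: True,                          "sell"),   # default rule
    ]
    for condition, action in rules:
        if condition(state):
            return action  # the same state always yields the same action

action = rule_based_action({"inventory": 0, "demand": 5})
# action is "reorder": the first rule whose condition holds fires
```

Because no probability is attached to the rule firing, there is no narrative event to associate with the outcome.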

The classic statistical decision problem is perhaps the oldest, and most ‘rational’, of this class of choice protocols. The fundamental problem of statistical decision theory is to select a possible action from a set of actions that minimizes expected loss. The loss function L(a, θ) associates a real-valued number (the loss) with an action a in some set of possible actions A and a state-space value θ ∈ Θ (referred to as the state of nature in the statistical literature). The triple (Θ, A, L) represents the statistical decision problem. Generally, the choice at hand is which value of θ represents the “true” state of nature. Nominally, there exists empirical data represented by the random variable X (which could be a multidimensional entity), the probability distribution of which, Pθ(x), depends on this true state of nature. A decision rule d maps a given value x of the random variable X to one of the actions in A; that is, d(x) ∈ A. The loss is therefore the random quantity L(θ, d(x)), and the expected value of L(θ, d(X)), when θ is the actual state of nature, is called the risk function

R(θ, d) = E[L(θ, d(X))] = ∫{x ∈ X} L(θ, d(x)) dPθ(x).  (E1)

The choice problem is to select the decision rule d from the set of all possible decision rules D that minimizes R. If it is assumed that each d ∈ D is such that, for each θ ∈ Θ, the distribution function FX(x|θ) is continuous on a set of probability one, then the above is the Lebesgue integral

R(θ, d) = ∫{x ∈ X} L(θ, d(x)) dFX(x|θ).  (E2)

Ferguson [29] delineates a conceptualization of much of the field of statistics based on this definition, coupling it with a game-theoretic construction. In fact, the triple (Θ, A, L) is a formal game in this sense.5 It is easy to see that this is a quite well-defined problem. However, implementing that definition in a specific context can be a considerable endeavor. On the other hand, creating a computer routine to implement a statistical decision rule in an agent does not seem to be conceptually prohibitive, although it might require significant time and resources to build and execute.
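A routine implementing the core of the statistical decision problem, choosing the action that minimizes expected loss over the states of nature, can indeed be short. The states, actions, loss values, and probability weights below are illustrative, not drawn from the chapter.

```python
# A sketch of the statistical decision step: given a finite state space,
# a loss function L(a, theta), and a probability weight for each state of
# nature, choose the action whose expected loss (risk) is smallest.
# All values here are illustrative.
def min_expected_loss(actions, states, loss, probs):
    def risk(a):
        return sum(loss(a, th) * probs[th] for th in states)
    return min(actions, key=risk)

states = ["low", "high"]
probs = {"low": 0.7, "high": 0.3}
loss_table = {("hold", "low"): 0, ("hold", "high"): 10,
              ("hedge", "low"): 2, ("hedge", "high"): 2}
best = min_expected_loss(["hold", "hedge"],
                         states,
                         lambda a, th: loss_table[(a, th)],
                         probs)
# risk(hold) = 0*0.7 + 10*0.3 = 3.0; risk(hedge) = 2.0, so "hedge" wins
```

In a full agent, the probability weights would themselves be updated from the observed data X, but the selection step remains this expected-loss minimization.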

In a general sense, all “rational” choice protocols are described by the statistical decision process defined above. Indeed, some would consider this formulation the axiomatic definition of a rational decision. This general formulation says nothing about the nature of the decision rules d ∈ D. They could be hugely complex or trivially simple. Moreover, the set of actions A can be finite or infinite. Much more familiar to the economic community is the choice process described in the random utility discrete choice case. The general formulation of this family of protocols is as follows. There exists a finite set J of possible choices, with #(J) being the number of elements in the set J. Assume that the choices and the choosers are characterized by a vector of variables zij for decision-maker i and choice j. Each decision-making agent has an associated real-valued function Ui(zij): J → R that assigns a utility to each choice. The alternative with the highest value of Ui is defined as the choice made, that is, the value j* for which

Ui(zij*) = max{j ∈ J} Ui(zij)  (E3)

This utility function is assumed to have an observable part Vi(zij) and a stochastic component εi(zij), and is therefore written as:

Ui(zij) = Vi(zij) + εi(zij).  (E4)

Very often the εi(zij) terms are assumed to follow an Extreme Value Type 1 distribution with common location parameter γ (which, without loss of generality, can be set to zero) and common scale parameter μ. Then it can be shown ([17] p. 106) that

Pi(j) = e^{μVi(zij)} / Σ{j′ ∈ J} e^{μVi(zij′)}.  (E5)
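The logit probabilities of Eq. (5) are a scaled softmax over the observable utilities, and computing them is straightforward. The utility values below are illustrative; the max-subtraction is a standard numerical-stability device that leaves the probabilities unchanged.

```python
import math

# A sketch of the logit choice probabilities of the random utility model:
# given observable utilities V_i(z_ij) and scale parameter mu, P_i(j) is
# proportional to exp(mu * V). Utility values here are illustrative.
def logit_probabilities(v, mu=1.0):
    m = max(v)                                  # subtract max for stability
    exps = [math.exp(mu * (x - m)) for x in v]
    total = sum(exps)
    return [e / total for e in exps]

p = logit_probabilities([1.0, 2.0, 0.5])
# the probabilities sum to 1 and the option with the highest V is most likely
```

As μ grows, the probabilities concentrate on the maximum-utility alternative, recovering the deterministic choice of Eq. (3) in the limit.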

The general discrete choice problem is based on the assumption that there exists a set of alternatives J with finite number of elements #(J). Furthermore, the agent making the choice has determined a complete preference order over the elements of J. A complete ordering on a set is a relation ⪯ having the following properties (the idea is easier to understand by reading a ⪯ b as “a is less than or equal to b”): (1) antisymmetry: if a, b ∈ J, a ⪯ b, and b ⪯ a, then a = b; (2) reflexivity: for every a ∈ J, it is true that a ⪯ a; (3) transitivity: if a ⪯ b and b ⪯ c, then a ⪯ c; and (4) completeness: for every pair a, b ∈ J, it is true that either a ⪯ b or b ⪯ a. If the preference order fails to meet these conditions, then the utility function does not necessarily exist, and the discrete choice problem cannot be formulated in a utility maximization context.

However, human beings are not bound by the definitions of preference orders. Non-transitive, circular orderings are common. For example, when Mary is asked to choose between chocolate and vanilla ice cream, she selects chocolate. When asked her preference between vanilla and strawberry, she chooses vanilla, and when asked her choice between strawberry and chocolate, she prefers strawberry. The ordering is not transitive. This situation occurs because humans generally determine orderings pair-wise over some (possibly quite short) period of time, and the circular inconsistency is quite easy to manage if the time of the preference comparison can vary. Moreover, there is no reason why the completeness property needs to be met in real-life situations. (An ordering that does not meet the transitivity and completeness conditions is termed a partial ordering.) Fortunately, it is possible to derive a set of completely ordered sets from any partially ordered set (by considering each completely ordered set as a separate entity, and ignoring singleton sets), so the utility maximization problem reduces to a bookkeeping issue (assuming there is sufficient data to estimate the number of models that might arise). In some cases, the collection of completely ordered sets can be represented in a hierarchy. But the point is that agent designers do not need to insist on complete choice sets. In fact, any kind of pairing relationship can be used as a preference ordering, and each can have a unique (empirically derived) utility function.
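Mary's circular preferences can be detected mechanically. The sketch below treats pairwise preferences as a directed graph and searches for a cycle; a cycle means the transitivity condition fails and no single utility function can represent the ordering. The function and data names are illustrative.

```python
# A sketch of detecting a non-transitive (circular) pairwise preference
# pattern such as Mary's: chocolate > vanilla > strawberry > chocolate.
def has_preference_cycle(prefs):
    # prefs: list of (winner, loser) pairs; depth-first search for a cycle
    graph = {}
    for winner, loser in prefs:
        graph.setdefault(winner, set()).add(loser)

    def reachable(start, target, seen):
        for nxt in graph.get(start, ()):
            if nxt == target or (nxt not in seen
                                 and reachable(nxt, target, seen | {nxt})):
                return True
        return False

    return any(reachable(w, w, {w}) for w in graph)

mary = [("chocolate", "vanilla"),
        ("vanilla", "strawberry"),
        ("strawberry", "chocolate")]
# has_preference_cycle(mary) is True, so no utility function can represent it
```

Removing any one of the three comparisons breaks the cycle, after which the remaining pairs can be extended to a complete ordering and a utility function exists again.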

The domain of rational choice models is not exhausted by the utility maximization of a discrete choice structure. Indeed, most choice situations are not even discrete, often requiring the selection of a parameter vector from a multidimensional real-valued vector space. Other methods are called upon here. Bayesian statistics have faded in and out of fashion over the past two centuries. Of special note are simulation approaches to statistical parameter estimation, confidence interval determinations, and hypothesis testing. In this context, the meaning of the word simulation is somewhat different from when it is applied to an agent-based model. What is referred to in this case is actually artificial sampling, where data values are generated with a computer from a known probability distribution, so that complex and otherwise intractable parameter estimates can be determined without expensive, perhaps even impossible, data collection.

The significant power for efficient allocation of resources in aid of narrative fulfillment represented by these rational protocols gives them superior positions in the pantheon of choice processes. It is this superior performance, historically incontrovertible, that suggests an ever-widening venue of application. But most humans cannot enlist the aid of these methods without extensive training and the assistance of a variety of other parties. Even as powerful as they are, they are bounded by the time and other resources required for their utilization, in the face of the urgency and importance of the particular choice problem at hand. That is, they are examples of bounded rationality, in spite of what they may seem. Bounded rationality is almost always portrayed in contrast to the messianic alternative of the demonic methods noted by Gigerenzer. This “Laplacian Daemon” is the all-seeing, all-knowing supreme intelligence that can solve any resource allocation problem and select the globally best option for all individuals for all time. Fortunately, though beyond the scope of this discussion, it cannot exist.

All of the rational choice mechanisms mentioned above are available for the specification of agent choice protocols. All have computer programs that fully specify how they should be executed, what the data should look like, how the results should be presented and the limitations on and conditions of their application. Moreover, it is clear that a number of organizations and institutions make extensive use of these methods. Companies routinely use operations research for a variety of optimal resource allocation tasks. Perhaps one of the more interesting and successful applications of operations research is the revenue management process used in the sale of tickets in the airline industry, now being extended to similar perishable goods such as hotel rooms, rental cars, and theater seats.

But the individual human being does not routinely engage such mechanisms in making choices. In fact, as noted above, they are very likely to be reductive and not structural models, and therefore may describe no actual process found in the real world. There is no evidence that human beings actually make routine choices using any of these tools. Humans tend to employ much simpler approaches to day-to-day choices, and in many instances extend these simple protocols to serious, far-reaching and life-changing circumstances where the more sophisticated, rational methods would seem to be called for. Given that an agent model of the human decision-maker must describe what the modeled human actually does, and not what it could or should do, these less rigorous and more ad hoc choice protocols must also be made available to the human agent modeler.


5. An example: the AirMarkets simulator flight choice model

A specific example of an agent-based, rational choice model is the AirMarkets Simulator, a representation of the narrative structure and related choice protocol that portrays the behavior of customers selecting from a set of alternative air flight choices. Because the available flights at any point in time are partially a function of the choices made by others previously, a run of the simulation consists of all individuals (or groups traveling together) in the world traveling on commercial air service over a week’s time period, with travel bookings starting 120 days before the subject week. (That time period ensures that no flight is unavailable at any time before the 120-day booking period, so all individuals booking before the start can be processed at once.) About 27 million traveling individuals or parties are booked on each simulation run, comprising about 42 million individual customers.

Each travel party chooses from all available service connecting the desired origin to the desired destination. The alternative is chosen by using a random number generator to produce a value between 0 and 1; the probabilities associated with the available options are then examined in arbitrary order to determine which is selected by that particular customer. The utility shown in Eq. (4) above has the following form for the value of V(i, j), where i is the indicator of the customer and j the indicator of the air travel option:

V(i, j) = βf(i) ln f(j) + βd(i) d(j) + βbd(i) ln dbase + βdc(i) Ndc(j) + βic(i) Nic(j) + β1st(i) X1st(j) + βec(i) Xec(j) + G(τ(i) − t(j))  (E6)

In this equation, the β’s are coefficients that reflect the values carried by the traveler with index i for the attribute of the flight option denoted by j. For example, βf(i) is the value for traveler i with respect to the fare f(j) associated with flight option j. Other important flight attributes include travel time d(j), the shortest available flight time dbase, the number of stops on the flight associated with allied airlines Ndc(j) and non-allied carriers Nic(j), and the cabin class of the flight option (first or economy). The β coefficients are estimated from extensive empirical data collected for that purpose.
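The selection step can be sketched as follows. This is not the AirMarkets implementation: the utility below keeps only two of the terms of Eq. (6) (fare and travel time), and the coefficients, option data, and flight identifiers are all hypothetical. The point is the mechanic the text describes: compute V for each option, turn the V values into logit probabilities, and resolve the choice with a single uniform draw.

```python
import math
import random

# A simplified sketch of the flight selection step: a truncated,
# illustrative version of V(i, j) with only fare and travel-time terms,
# converted to logit probabilities, then resolved with one uniform draw
# (a roulette-wheel step). Coefficients and data are hypothetical.
def choose_flight(options, beta_fare, beta_time, mu=1.0, seed=7):
    v = [beta_fare * math.log(o["fare"]) + beta_time * o["time"]
         for o in options]
    m = max(v)
    exps = [math.exp(mu * (x - m)) for x in v]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = random.Random(seed).random()          # one uniform draw in [0, 1)
    cum = 0.0
    for option, p in zip(options, probs):
        cum += p
        if r < cum:
            return option["id"]
    return options[-1]["id"]                  # guard against rounding

flights = [{"id": "AA100", "fare": 320.0, "time": 5.5},
           {"id": "UA200", "fare": 280.0, "time": 7.0}]
picked = choose_flight(flights, beta_fare=-1.5, beta_time=-0.3)
# picked is one of "AA100" or "UA200"
```

Running the same draw across millions of seeded travel parties is what produces the aggregate booking patterns of a full simulation run.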

The function G(τ(i) – t(j)) is peculiar in the sense that it represents the desirability of the departure or arrival time of a flight option. (Note that either departure or arrival time is dominant, since the actual flight time cannot be altered by the traveler.) The passenger does not care if a flight takes off (or arrives) anywhere between two values a and b. If it is outside this range, then the further away from a or b, the less desirable the option is. The function G is referred to as a Box-Cox formulation, and is of the following form:

G(τ(i) − t(j)) =
  βE(i) [((t(j) − τ(i) + a + 1)^λE − 1) / λE],   if τ(i) − t(j) < a
  0,                                             if a < τ(i) − t(j) < b
  βL(i) [((τ(i) − t(j) − b + 1)^λL − 1) / λL],   if τ(i) − t(j) > b  (E7)

In this equation, t(j) is the departure (arrival) time of flight option j, τ(i) is the desired departure (arrival) time of traveler i, βE(i) and βL(i) are coefficients associated with traveler i according to whether the departure (arrival) is early (E) or late (L), and λE and λL are empirical values for the traveling population estimated from observed data.
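The piecewise Box-Cox function can be sketched directly. The parameter values below are illustrative, not the estimated AirMarkets values; the sketch shows only the shape the text describes: zero penalty inside the indifference window [a, b], and a Box-Cox-transformed penalty that grows the further the schedule gap falls outside it.

```python
# A sketch of the Box-Cox schedule-delay function G of Eq. (E7): zero
# inside the indifference window [a, b] and a growing (negative-utility)
# penalty outside it. Parameter values are illustrative, not estimated.
def schedule_delay(tau, t, a, b, beta_e, beta_l, lam_e, lam_l):
    gap = tau - t                      # desired minus actual time
    if gap < a:                        # gap falls below the window
        x = t - tau + a + 1.0
        return beta_e * ((x ** lam_e - 1.0) / lam_e)
    if gap > b:                        # gap falls above the window
        x = gap - b + 1.0
        return beta_l * ((x ** lam_l - 1.0) / lam_l)
    return 0.0                         # inside the window: no penalty

# Inside the window the penalty vanishes...
inside = schedule_delay(tau=10.0, t=9.5, a=-1.0, b=1.0,
                        beta_e=-0.4, beta_l=-0.6, lam_e=0.5, lam_l=0.5)
# ...and outside it the term is nonzero (a negative utility contribution).
outside = schedule_delay(tau=10.0, t=13.0, a=-1.0, b=1.0,
                         beta_e=-0.4, beta_l=-0.6, lam_e=0.5, lam_l=0.5)
```

The "+1" inside each transform keeps the Box-Cox argument at least 1 at the window edge, so the penalty starts at zero and grows smoothly.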

If the G function expresses the cost to a traveler of not departing (or arriving) when desired, then the complement to it is the preference structure in the travel population of the desired departure (arrival) times. This function is purely empirical, and is denoted by Θ(τ), where τ is the desired departure (arrival) time. Then, in accordance with the probability function illustrated by Eq. (5), the probability that traveler i will select flight option j over the time period [0, W] (nominally 1 week) is given by

p(i, j) = ∫[0, W] ( e^{V(i, j|τ)} / Σ{k ∈ Φ(m)} e^{V(i, k|τ)} ) dΘ(τ).  (E8)

This gives us an idea of the nature of a rational narrative protocol of use in an agent-based model describing human behavior with a computer system. The data which characterizes the behavior of the 27 million traveling parties moving by air in a typical week around the world is substantial, but not at all difficult to create or maintain. (It is, however, expensive: over $2.5 million was spent collecting the data that represents the empirical values of appropriate traveler data.) The AirMarkets Simulator executes on an Intel 8-processor desktop computer in about 35 minutes, with no other activity being executed simultaneously. A complete, detailed description of the underlying structure of the Simulator is given in [3], pp. 156–269.


6. Heuristic choice protocols

The models of rational human behavior implicit in economic theory briefly described so far—homo economicus—are not the only kind of human choice possible. In fact, there is scant evidence that people behave anything like the optimizing behavior suggested by these protocols. Virtually every scholar who examines the problem derides the idyllic nature of rational, economic humans. Indeed, the economic man model is often more normative than descriptive. While quite useful, as demonstrated by its singular success, applying a normative model ultimately begs the question of how people ‘really’ make choices, and to that extent the economic theory that is the foundation of discrete choice and utility maximization methods is defective. Nor is it necessary, given the development of agent-based simulation models. The alternative to economic man is often couched somewhat inappropriately in terms of bounded rationality.

Bounded rationality refers to the limitations in resources available to undertake and perform data collection and analysis leading up to a decision, in other words, the execution of benefit/cost analyses, formal or otherwise. In this regard, two streams of thought are discernable in the analysis of human rational behavior. The first, stemming from game theory, explores human decision making as a real-valued trade-off endeavor. Indeed, classic economic theory assumes that all the entities in a given economy converge to such utility function rationality as equilibrium is reached. Among the many tributaries of this line of thought is the utility structure that leads to discrete choice theory and the modern study of consumer choice behavior discussed previously. The second stream is the notion of satisficing as a decision structure. Satisficing is choice-making based on being “good enough” rather than “utility maximizing.” This idea fits into the narrative framework.

Gigerenzer and Selten [30] capture this idea with a simple taxonomy of rational choice, which they refer to as “visions of rationality.” They break rationality down into two broad classes. One they call demons, referring to the demonic capabilities they view as necessary to carry out rational decision-making in the real world without regard for constraints of time or resources, as mentioned. Demonic reasoning is dissected into unbounded rationality and optimization under constraints. The former is literally applying limitless resources to a decision problem, or being presented with a decision problem so simple (such as a statistical estimation problem) that all relevant issues are easily known. The latter refers to concepts such as those frequently seen in operations research, in which the problem at hand has been constrained to become manageable. In this case, however, the choice of the nature and values of the constraints is subject to the same resource limitations as any decision problem, thus only begging the issue of what level of demonic strength is available. Gigerenzer suggests that the bounded rationality side consists of two components: the search for options or alternatives, encompassed under the label of “satisficing,” and the actual choice among alternatives, referred to with the term “fast and frugal” heuristics. The searching activity includes methods for finding options and methods for stopping the search.
Some of the search methods Gigerenzer notes include: random search, where the agent explores the decision environment without any apparent organization until time runs out; ordered search, using the validity of environmental cues as they apply to the choice problem at hand as the ordering mechanism; search by imitation, using apparent similarities of this decision problem to those encountered in the past (imitation lets us know where to look and what to look for, but limits results if the environment being searched is novel or unexpected); and emotions, which apparently act to narrow down the search space in effective, but not well understood, ways. Other search methods readily come to mind, but all of them can be interpreted as being enabled by the narrative context in which the choice event is presented. That is, the search process is governed by what the agent, because of the controlling narrative that creates the decision context, considers important to the resource allocation and outcome probabilities associated with the atomic narrative.

Stopping the search is where satisficing comes in—when have we searched long enough and established enough options? When we are satisfied that further search will not add any important alternatives, or when we have no more time to gain additional knowledge? Gigerenzer [31], pp. 129–165, proposes what he calls the probabilistic mental model (PMM) as a construct to account for the satisficing and fast and frugal heuristic choice protocols. In a PMM, the individual puts the choice event at hand into a mental construct of similar choice situations it has encountered in the past, or has learned by one method or another, and uses that context as the satisficing criterion. In other words, people fit decision problems into models that seem somehow appropriate to the problem, make the decision, then modify the model if expected results are not realized. This approach argues that the limited cognitive and computational abilities of humans militate against a purely analytic benefit/cost structure in favor of agile and adaptive, if less than optimal, heuristic decision rules.

Heuristic protocols are choice mechanisms that rely on relatively little information and rule-of-thumb thinking. There is strong evidence that much of the choice behavior of humans is of this variety, if for no other reason than that bounds on available time for decision-making prohibit any other approach. For example, Malcolm Gladwell’s discussion of virtually instantaneous human decision making in his book Blink [32] addresses this phenomenon. Todd [33] offers a simple listing of some of the more important fast and frugal heuristics:

  • When choosing between two alternatives, one recognized and the other not, the recognition heuristic says choose the recognized one. The basis for this heuristic seems to be that recognized alternatives are apt to be more successful, and therefore more likely to be recognized, and thus choosing them is a better idea. Note that increased option search can reduce choice efficiency if more recognized options are added.

  • In the take the best heuristic the agent selects the best alternative as measured by one single criterion (e.g., price). Other dimensions which characterize the issue in question are not considered at all. One can see how this fits neatly into the narrative framework if the criterion reflects a resource deemed supremely important for the realization of the narrative, as it becomes the dominant factor in the decision. And different agents may have different criteria for what constitutes “best.”

  • An extended form of the take-the-best heuristic (which in fact can be shown to actually be rational) is lexicographic ordering, a multi-dimensional extension of take-the-best. This line of thought has been explored more fully by Tversky [34] with his elimination by aspects approach. Elimination by aspects is a choice method wherein the individual has a set of criteria in mind on which he will evaluate a set of alternative choices. He ranks the criteria from most to least important, and then proceeds to evaluate each alternative against the first criterion. If two or more alternatives have equal values according to that criterion, he eliminates all the others from consideration and moves on to the next criterion. If he reaches the last criterion and still has more than one choice alternative left, he selects among the remaining alternatives at random; that is, he engages a random protocol. The phrase ‘lexicographic’ is also used to describe this protocol, since alphabetic ordering is done this way. Tversky also showed the equivalence of elimination by aspects to discrete choice, thus moving this seeming heuristic into the domain of the rational.

  • Another fast and frugal heuristic approach that lies on the boundary between pure rule of thumb and the rational choice operations is Dawes's rule [35]. This is a simple linear scoring method: evaluate the alternatives against a set of criteria by determining whether each alternative is positive or negative with respect to each criterion, and subtract the number of negatives from the number of positives. The option with the highest score is chosen.
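Dawes's rule reduces to counting signs, which makes it easy to state as code. The apartments and the three yes/no criteria below are invented for illustration.

```python
def dawes_rule(options, criteria):
    """Dawes's rule: score each option by the number of criteria it
    satisfies minus the number it fails; choose the highest score."""
    def score(option):
        return sum(1 if criterion(option) else -1 for criterion in criteria)
    return max(options, key=score)

# Hypothetical apartments judged on three binary criteria.
apartments = [
    {"name": "X", "rent": 900,  "sunny": True,  "near_work": False},
    {"name": "Y", "rent": 950,  "sunny": True,  "near_work": True},
    {"name": "Z", "rent": 800,  "sunny": False, "near_work": False},
]
criteria = [
    lambda a: a["rent"] < 1000,  # affordable?
    lambda a: a["sunny"],        # gets sun?
    lambda a: a["near_work"],    # short commute?
]
print(dawes_rule(apartments, criteria)["name"])  # "Y": score +3 beats +1 and -1
```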

  • Other heuristics are described by scholars in several fields, especially cognitive psychology. Kunda [12] offers an extensive array. She mentions the representative heuristic, wherein a choice is based on the similarity of the choice situation to a category of choices that have been faced or witnessed before ([12], pp. 57–89). The determination of similarity rests on matching characteristics of the situation at hand to one or more of the attributes that define a class of situations, even though the situation may differ from members of the class in details. This is conceptually coherent from a narrative perspective, in the sense that the value of resources, and the weight put on the factors which assess that value within a narrative, may be among the criteria that define the class similarity. In this sense it is somewhat like the recognition heuristic described above.

  • Another family of heuristic methods cited by Kunda is the collection of statistical heuristics, referring to statistical rules of thumb most people seem to have learned and carry around with them. They generally arise from dealing with the pervasive uncertainty life brings. For example, having all nine of one's grandchildren be of the same gender would seem quite unusual to most people, while having all three be of the same gender would not seem that odd. But why? The suggestion is that people apply an elementary bit of statistics to the problem, reasoning that the gender of a child is a fifty-fifty proposition and, equating that to the tosses of a coin, that tossing nine heads in a row happens much less often than tossing three in a row.
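The coin-toss reasoning is easy to make exact. All n children being the same gender can happen in two ways (all boys or all girls), each with probability (1/2)^n:

```python
# P(all n grandchildren the same gender) = 2 * (1/2)**n = (1/2)**(n-1),
# treating each birth as an independent fifty-fifty event.
def prob_all_same_gender(n):
    return 0.5 ** (n - 1)

print(prob_all_same_gender(3))  # 0.25 -- one family in four
print(prob_all_same_gender(9))  # 0.00390625 -- about 1 in 256
```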

  • One final contribution to the heuristic array is more subtle than the others: anchoring and adjustment ([12], pp. 102–109). This is the tendency for people to base a decision about a specific issue on a reference to other (perhaps completely irrelevant) situations. That is, some change in an element of the context in which the choice operation is taking place may cause a choice to vary from one instance to another, even though the context change is not part of the choice event. This setting of an anchor (the changing context element) will cause individuals to adjust their choices to be consistent with the anchor, even though the anchor does not bear on the choice event itself.

There are many additional heuristic processes that could be identified, and this would seem to be a fertile area for further research. There is a considerable psychological and sociological literature that should be explored to extract current understanding of these choice mechanisms and to formulate computing structures applicable to agent-based models. There is little doubt these mechanisms are used very frequently in many day-to-day choice situations, and they should be available to the human agent modeler as much as the more flamboyant rational methods are. But in their implementation their sometimes severe bias must also be recognized; it is just as much a part of the protocol as the actual choice itself.


7. Social network choice protocols

Humans rarely make decisions completely alone. Many choices are subject to consideration and examination not only by the chooser directly, but also by other individuals who are connected to her in some way. Friends, relatives, other respected (or not so respected) experts, celebrities, people in authority, co-workers and many others enter into the choice-making process in a host of ways, and with a variety of consequences. Only a few such mechanisms can be considered here.

In a sense, social network choice protocols are somewhere between the rational approaches and the individual heuristics. Gigerenzer poses the dilemma:

“In many real-world situations, there are multiple pieces of information, which are not independent, but redundant. Here Bayes’ rule and other ‘rational’ algorithms quickly become mathematically intractable, at least for ordinary human minds.6 These situations make neither of these two views [laws of probability, and reasoning error] look promising. If one was to apply the classical view to complex, real-world environments, this would suggest that the mind is a supercalculator—carrying around the collected works of Kolmogoroff, Fisher and Neyman—and simply needs a memory jog …. On the other hand, the heuristics and biases view of human irrationality would lead us to believe that humans are hopelessly lost in the face of real-world complexity, given their supposed inability to reason according to the canon of classical rationality, even in simple laboratory experiments.” ([31], p. 167)

That most people survive without falling into Gigerenzer’s abyss is due in part to social network choice methods. Clark [36] makes a compelling argument in support of this vital role, suggesting that the “scaffolding” of the social network in which all humans are embedded is central to our ability to make decisions, survive and advance. Kunda [12] provides a broad and insightful survey of the field, and Sternberg and Ben-Zeev [37] offer an excellent introduction. The rapid rise of social networking sites on the internet—Facebook, Twitter—testifies to both the importance of the social network and the ease with which people adapt to new forms of it.

The development of formal methods of social network analysis has become quite active, as well, partly because of the advances in computing and agent-based modeling. Network analysis as a formal field of academic endeavor dates back at least to Erdos and Renyi [38] but the emergence of the worldwide web has spurred more recent advances, including the exploration of scale free and stochastic network analysis. An easily accessible introductory survey of modern methods can be found in Barabasi [39] or Buchanan [40]. A more advanced and formal treatment is offered by Dorogovtsev and Mendes [41]. Newman et al., [42] have compiled a compendium of more recent developments in the field.

In an agent-based modeling context, networks are an expression of the topology of the explicit space required by Epstein’s definition of an agent model (Epstein [28]). That is, a network defines which agent is “close to” which other agent. Moreover, it defines what the word “close” means in a particular model. Epstein and Axtell illustrate the network role on the Sugarscape grid with respect to economic and social interaction ([43], pp. 130–135). As they show, a network in an agent-based model is a communication connection between one agent and another. A single agent can have such connections with a number of other agents. The connection can be one-way or two-way. Different kinds of connections can reflect differences in the nature of the inter-agent communication. And, perhaps more importantly, networks change over time, with new connections being made and older ones dying out.

A convenient way to consider the network structure of an agent-based model is to stipulate that each agent will maintain a list of the other agents to whom it is connected. Separate lists can be kept for different kinds of communications. If two agents have each other in their individual lists, then the network communication link is mutual; otherwise it is just one-way. The message posting function of the computer implementation of an agent model can then be engaged to manage the communications between agents during the simulation. However, network structures are not a requirement of an agent model. Space can be portrayed in other ways. In a cellular automaton, agents reside on a grid where communication between agents is based on being physically next to each other on the grid. In the AirMarkets Simulator, simple agent communication networks are used to define the relationships between distribution system agents and airline agents.
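The list-keeping scheme just described might look like the following sketch, in which the simulation's message-posting function is reduced to a plain inbox list. The class and method names are invented for illustration, not taken from any particular toolkit.

```python
class Agent:
    """An agent that keeps a list of the agents it is connected to."""
    def __init__(self, name):
        self.name = name
        self.links = set()   # names of agents this agent can message
        self.inbox = []      # messages received during the simulation

    def connect(self, other):
        # One-way link unless `other` also connects back.
        self.links.add(other.name)

    def is_mutual(self, other):
        # The link is mutual only if each agent lists the other.
        return other.name in self.links and self.name in other.links

    def post(self, other, message):
        # Message posting: deliver only along an existing link.
        if other.name in self.links:
            other.inbox.append((self.name, message))

a, b = Agent("a"), Agent("b")
a.connect(b)           # one-way: a -> b
a.post(b, "hello")
print(a.is_mutual(b))  # False until b connects back
b.connect(a)
print(a.is_mutual(b))  # True
```

Separate lists (or a mapping from message kind to a set of names) would handle different kinds of communication, and links can be added or dropped during a run to let the network evolve.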

Perhaps the most common form of social network-dictated choice protocol is imitation. There is strong evidence human beings learn by imitation, and thus it is reasonable that the same approach would be called on when faced with a new choice situation: what did others do in this same situation? Very often the narrative event which creates the choice is encountered in the context of a shared narrative, and thus the choice by the individual is apt to follow the course of the underlying narrative supporting the event definition. In terms of agent modeling, the agent which uses this protocol must be linked through a social network to the individual or group of individuals whom it wants to imitate. The imitation cannot be certain, however, for that is reserved for consilvocation. There must be a randomizing mechanism that gives the imitation a stochastic element, such as being linked to two or more individuals who can be imitated, with a randomization device that dictates which one is followed in a particular event.
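A minimal imitation protocol, with the required stochastic element supplied by a random draw over the linked agents, might look like this. The links and the prior choices are hypothetical data standing in for the state of a running simulation.

```python
import random

def imitate(links, last_choice, rng=random):
    """Imitation: pick one linked agent at random and copy its most
    recent choice. `last_choice` maps agent id -> that agent's
    prior choice for this kind of narrative event."""
    model = rng.choice(links)   # the stochastic element: whom to follow
    return last_choice[model]

rng = random.Random(42)         # seeded for reproducibility
links = ["alice", "bob", "carol"]
last_choice = {"alice": "train", "bob": "car", "carol": "train"}
print(imitate(links, last_choice, rng))  # "train" or "car", per the draw
```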

Closely related to imitation is expert advice. It is natural that someone believed to know more about a particular narrative event—an expert in the field—would make a wiser choice than a novice. And as humans learn as children and adolescents, the courses of action suggested by experts with more experience are important guides in assessing the probable outcomes of various choice options. From an agent modeling perspective, clearly a social network link is needed between the agent and the expert. Again, some stochastic mechanism needs to be present if the requirements of the narrative construct are to be met. In this case, however, in addition to the selection of one of a possible set of experts, whether or not to follow the expert advice can be employed as the randomization method. Taleb ([44], pp. 145–156) examines what he refers to as the “expert problem” in some depth, classifying experts into those who have expertise in subjects for which expertise exists, such as science and medicine, and reserving the phrase “empty suits” for those who claim expertise in things for which no expertise can possibly exist, such as forecasting the value of a stock exchange index tomorrow morning.
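One way to render the expert-advice protocol, using whether-to-follow as the randomization device, is sketched below. The probability of deference and both choice functions are assumptions of the example.

```python
import random

def expert_advice(expert, own_choice, follow_prob, options, rng=random):
    """Ask the expert, follow the advice with probability
    `follow_prob`, otherwise fall back on the agent's own choice."""
    advice = expert(options)
    if rng.random() < follow_prob:
        return advice
    return own_choice(options)

options = ["stock", "bond"]
picked = expert_advice(
    expert=lambda opts: "bond",       # the linked expert's recommendation
    own_choice=lambda opts: opts[0],  # the agent's own fallback rule
    follow_prob=0.8,                  # assumed deference probability
    options=options,
    rng=random.Random(7),
)
print(picked)  # "bond" or "stock", depending on the draw
```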

This raises an issue to be addressed by an agent model that uses either imitation or reference to experts: How do those experts make a decision which can be imitated or on which expert opinion can be founded? One might suggest that the “imitand” or the expert use a rational choice method, for example. Or there could be hierarchies of imitators or experts, each imitating others while providing expertise to other agents. It would be quite interesting to explore how such networks might work with a simple agent model. In particular, emergent properties of agent-based simulations which contain such social mechanisms could be most curious. Finally, note that the required stochastic property of the choice process of an agent using imitation or experts could be inherited from the stochastic property of the imitated or expert agents.

A third kind of social network choice protocol is voting. An individual can make a choice by polling a set of other individuals to see what they would choose, and then determine his or her choice by tallying the results. For a simple binary choice, this technique is trivial. (Again, how those who cast votes determined their respective choices is a modeling design issue.) For choice events where three or more options and three or more individuals are polled, however, Kenneth Arrow’s Impossibility Theorem enters the picture, and some form of bias has to be introduced to guarantee an outcome [45]. Again, implementation of this type of social network decision protocol is straightforward in the design of an agent-based model. The agent in question maintains a list of voters and polls each one by supplying the voter with the choice problem and accepting that voter’s choice as the vote. The agent then tallies the votes and determines its choice.
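The polling-and-tallying loop is direct to implement. Plurality counting is used below as the outcome-guaranteeing rule the text mentions, and the voters' own decision processes are stubbed out as fixed preferences (how they decide is, as noted, a modeling design issue).

```python
from collections import Counter

def vote_choice(voters, choice_problem):
    """Poll each voter (a function from problem -> option) and take
    the plurality winner of the tally."""
    tally = Counter(voter(choice_problem) for voter in voters)
    return tally.most_common(1)[0][0]

# Hypothetical voters with fixed preferences.
voters = [lambda p: "A", lambda p: "B", lambda p: "A"]
print(vote_choice(voters, ["A", "B"]))  # "A" wins 2-1
```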

A particularly strong form of social choice is not extensively discussed in the literature, at least as far as the research of this author has been able to find. The concept has been termed by associates consilvocation.7 In this situation, an agent turns over to another agent complete control over the choice to be made. A trivial example is the husband leaving to the wife the choice of restaurant for dinner. Consilvocation happens all the time in a democratic political context: the individual citizen elects a representative to sit in a legislative body and make decisions on his behalf. The citizen has thus turned over the choice function to the representative.

Consilvocation is a way of eliminating choice events that are out of an individual’s control but will have an impact on the course of a compound narrative. It simplifies matters considerably. Another attribute of the consilvocation choice protocol is that the consilvocated right to make the decision can be revoked. The process by which revocation occurs can be simple (the husband elects to pick the restaurant himself) or complex (the citizen must wait for the next election or invoke a recall process). Adding consilvocation to an agent model is generally not difficult but could well have a dramatic impact on the number of individuals demonstrating a specific behavior in a relatively large-scale agent model simulation. Recall that in the AirMarkets Simulator, a single agent actually represents multiple passengers, and one agent buys tickets for everyone in the group it represents.
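A revocable delegation of the choice function might be sketched as follows; the class and its methods are invented for illustration.

```python
class DelegatingAgent:
    """An agent whose choice function can be consilvocated to
    another agent and later revoked."""
    def __init__(self, name, own_choice):
        self.name = name
        self.own_choice = own_choice   # the agent's own choice function
        self.delegate = None           # agent currently holding the choice

    def consilvocate(self, other):
        self.delegate = other          # hand the choice over entirely

    def revoke(self):
        self.delegate = None           # take the choice back

    def choose(self, options):
        if self.delegate is not None:
            return self.delegate.choose(options)
        return self.own_choice(options)

husband = DelegatingAgent("husband", own_choice=lambda opts: opts[0])
wife = DelegatingAgent("wife", own_choice=lambda opts: opts[-1])
husband.consilvocate(wife)
print(husband.choose(["diner", "bistro"]))  # "bistro": the wife decides
husband.revoke()
print(husband.choose(["diner", "bistro"]))  # "diner": choice reclaimed
```

Delegates can themselves delegate, which is how the citizen-to-representative chain would be modeled.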


8. Bias in choice

Finally, the term bias in the context of this analysis refers to the difference between the choice that is actually made and the choice that would be made if the “correct” alternative were selected. Obviously, this calls for a definition of “correct.” In a formal statistical decision problem, bias refers to the difference between the expectation of a statistic used to estimate a parameter and the value of the parameter itself. That is, a statistic S used to estimate a parameter θ is unbiased if E[S] = θ, and the search for unbiased estimators is a long-standing topic of statistical research. The definition of correct is not so easily determined in the agent choice circumstance.
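The statistical definition can be demonstrated with a short simulation: the sample mean is unbiased for the population mean (E[S] = θ), while the divide-by-n variance estimator is biased low, and dividing by n − 1 removes the bias. The distribution, sample size and trial count below are arbitrary choices for the demonstration.

```python
import random
import statistics

rng = random.Random(1)
theta, sigma, n = 5.0, 2.0, 5    # population mean, sd, sample size
means, var_n, var_n1 = [], [], []
for _ in range(20_000):
    sample = [rng.gauss(theta, sigma) for _ in range(n)]
    m = statistics.fmean(sample)
    ss = sum((x - m) ** 2 for x in sample)
    means.append(m)
    var_n.append(ss / n)         # biased: E = (n-1)/n * sigma**2 = 3.2
    var_n1.append(ss / (n - 1))  # unbiased: E = sigma**2 = 4.0
print(round(statistics.fmean(means), 1))   # ~5.0, so E[S] = theta
print(round(statistics.fmean(var_n), 1))   # ~3.2, biased low
print(round(statistics.fmean(var_n1), 1))  # ~4.0
```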

Many authors identify and describe choice bias in a manner similar to the statistical definition, holding that the choice that results from engaging a rational decision protocol is the correct one, and that heuristic or social network protocols leading to different choices for the same narrative event are biased. As has been said before, if human beings are to be validly represented by agent-based models, then they must be represented as they are, not as someone thinks they ought to be.8

Some biases are perceptual in nature, arising from inaccurate representations of reality, which in terms of the formal definition of agent can be accommodated with properties of the perceptor component. Festinger et al. [46] and Tumminia [47] explore some of the implications of cognitive dissonance, which occurs when an individual believes in a reality that is directly contradicted by the sensory evidence before him. The mistakes-were-made assertion described by Shermer ([8], pp. 67–71) is an example of self-justification bias [48]. Inattentional blindness is the failure to recognize some feature of the surrounding environment because attention is focused on some other environmental feature [49]. Blind spot bias is the ability to see biased perception on the part of others while failing to see it in oneself. It is similar to better-than-average bias, which causes a person to think they are more capable at any given skill or talent than the average individual [50]. Humans also tend to see themselves in a more positive light than they see others [51], creating a self-serving bias. People tend to accept credit when they behave in socially acceptable ways and blame circumstances outside themselves when they do not do so well; this is an example of an attribution bias [52].

But not all biases are perceptual. Two, in particular, are based on misunderstanding fundamental concepts in probability theory. Kunda ([12], pp. 54–62) notes that probability theory, as a formal mathematical discipline, dates back only 300 years,9 and that its relatively late development speaks to its intuitive difficulty. One is the base rate bias, described nicely by Kunda, and the other is the so-called Let’s-Make-a-Deal fallacy, described by Shermer ([8], pp. 83–84). Base rate bias is very common. It stems from misunderstanding the incidence of some particular characteristic in the underlying population, and thus from a misapplication of Bayes’ rule to an intuitive inference. For example, consider John, who is a small gentleman with a quiet demeanor, who wears glasses, dresses conservatively, and often is seen carrying a book. Is John a factory worker or a librarian? Many would say a librarian. But the likelihood that he is a factory worker is far higher than the likelihood that he is a librarian: none of the distinguishing criteria disqualifies him from being a factory worker, and there are far more factory workers than librarians. This is the base rate fallacy, failing to account for the actual rate of incidence of a factor in making a judgment. It is a primary reason for bias in the representative heuristic.
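The John example can be run through Bayes' rule with made-up but plausible numbers: even if a librarian is five times as likely as a factory worker to fit the description, a 50-to-1 base rate in favor of factory workers dominates.

```python
# Hypothetical inputs: P(description | occupation) and relative base rates.
p_desc_given_lib, p_desc_given_fac = 0.10, 0.02
n_lib, n_fac = 1, 50   # assume 50 factory workers for every librarian

# Bayes' rule with relative counts (the population size cancels out):
posterior_lib = (p_desc_given_lib * n_lib) / (
    p_desc_given_lib * n_lib + p_desc_given_fac * n_fac
)
print(round(posterior_lib, 3))  # 0.091: John is probably a factory worker
```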

The Let’s-Make-a-Deal fallacy is significantly more subtle. The name refers to a game show popular on American daytime television. A contestant stands before three closed doors, behind one of which is a valuable prize, usually a car. Behind the other two are valueless prizes, historically goats (but animal rights advocates objected, so some other worthless offering is now used). Which door hides which prize is unknown to the contestant. The contestant chooses one of the doors and is awarded the prize behind that door, hopefully the car. But before the chosen door is opened, the host opens one of the other two doors (always revealing a goat) and asks the contestant if she wants to change her door choice and take the prize behind the remaining door. Should the contestant take the offer, or should she stay with her original selection? Most people will say it does not matter: assuming that the likelihood of a car being behind any of the three doors is the same (one third) for each, then knowing that it is not behind one of them only means that the probability of it being behind either of the remaining two is now one half, and therefore switching does not affect the odds of winning. But that is incorrect. In fact, the probability that the car is behind the door that was chosen originally by the contestant is one third, and the probability that it is behind the unrevealed other door is two thirds.10 The explanation is clear (but for many, not convincing). The contestant faces three possibilities: the doors can hide (1) car, goat, goat; (2) goat, car, goat; or (3) goat, goat, car. Suppose she starts the game by selecting door number one. If she switches and the first possibility is the true situation, she loses. But if either of the other two possibilities is the case and she switches, she wins. Thus, the probability of winning by switching is two thirds. A simple computer application that simulates the game any desired number of times is easy to write, and execution of that simulation verifies the correctness of the analysis.
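Such a simulation takes only a few lines; with enough trials the win rates settle near two thirds for switching and one third for staying.

```python
import random

def monty_hall(trials, switch, rng=random):
    """Simulate the game and return the fraction of wins."""
    wins = 0
    for _ in range(trials):
        car = rng.randrange(3)    # door hiding the car
        pick = rng.randrange(3)   # contestant's initial pick
        # Host opens a goat door that is neither the pick nor the car.
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            # Move to the one remaining closed door.
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

rng = random.Random(0)
print(monty_hall(100_000, switch=True, rng=rng))   # close to 2/3
print(monty_hall(100_000, switch=False, rng=rng))  # close to 1/3
```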

There are, of course, many similar examples of incorrect reasoning. That they exist and should be avoided in the making of careful decisions is obvious. But it is equally obvious that these “errors” can be subtle and difficult to detect. Once again, human agents in computer simulations must be modeled as they are, not as they should be. That means determining when bias is an important part of the agent behavior and building that bias into the agent. But as difficult as the task might seem initially, it is ameliorated by the knowledge that choice protocols span cultures and societies, so what is learned in one context can be applied in others, and, with the connection of the choice protocol to the narrative framework, complex behaviors can be built up out of simpler, atomic elements.


9. Conclusions

This presentation sets the stage for more exhaustive incorporation of narrative structures into human behavior-based computer artificial intelligence applications. One such application has been in operation for the last several years at the AirMarkets Corporation, consisting of an agent-based model of air passenger behavior, including flight schedule development and revenue management of air fares as a function of advance booking time, departure and arrival times, group size and available service. Because of the interdependency of air fares, available service and air demand, for each run the AirMarkets Simulation replicates the air travel utilization for every seat on every scheduled flight taking place in the world over a one-week period: more than 42,000,000 passengers buying tickets in approximately 288,000 directional city-pair markets, with bookings made as much as 120 days in advance of departure. The narrative structure supporting this agent-based simulation is not complex. Since travel is usually a utility associated with some other activity, a choice function based on the utility value of departure/arrival times, fares, booking times and travel purpose is sufficient for the AirMarkets Simulation. Other behavior activities, however, will require more in-depth structuring.

Beyond the semi-rational construction of narratives using hard logic and mathematics, the description of human narrative decision-making gets much softer and more obscure. The heuristic thinking by individuals is perhaps rational, perhaps not. It depends on how accurate the time-dependent, context-constricted thinking of the decision-maker turns out to be. In a logically-looser setting, social structure can become the basis for narrative behavior and the guidelines for assessing the validity of the option choices become even less rigorous. Finally, there is a substantial level of human narrative activity that can only be classified as bias. There is no logic, no bounded rationality and no social context which explains such narrative behavior. In this area several frivolous, and several dangerous, actions by human narrative holders are justified.

However, there is one pressing issue, among the several that exist, that must be addressed. It is necessary to explore the impacts of at least four mathematical anomalies on the structure of even a simple, atomic narrative. These are: (1) the Stone-Weierstrass Theorem, which stipulates the minimum mathematical structure for the contents of a data set to be represented by a polynomial (which in turn can become part of a narrative, but might not be consistent across several data sets); (2) the Arrow Impossibility Theorem, which shows the theoretical limits of rational decision-making on electoral processes; (3) Gödel’s Theorem, which determines that any logical structure is subject to questions about completeness that can only be addressed using a logical structure more in question than the one being assessed; and (4) the Heisenberg Uncertainty Principle, which cites limits on observable, non-probabilistic statements of the physical universe. The exploration of these issues is the subject of my current research.

References

  1. Parker R. The construction of agent simulations of human behavior. In: Karwowski W, Abram T, editors. Intelligent Human Systems Integration, Proceedings of the 2nd IHSI Conference: Integrating People and Intelligent Systems 903. Springer; 2019. pp. 435-441
  2. Parker R. The AirMarkets simulation program. In: Everett WA, editor. PNW AIAA Conference; 2014
  3. Parker R. Virtual markets: The application of agent-based modeling to marketing science [dissertation]. Sydney, Australia: University of Technology; 2010
  4. Montague R. Your Brain is (Almost) Perfect. New York, NY: Plume; 2006. pp. 69-72
  5. Danto A. Narration and Knowledge. New York, NY: Columbia University Press; 1985
  6. Fisher W. Human Communication as Narrative: Toward a Philosophy of Reason, Value and Action. Columbia, SC: University of South Carolina Press; 1987
  7. Sober E, Wilson D. Unto Others: The Evolution and Psychology of Unselfish Behavior. Cambridge, MA: Harvard University Press; 1998
  8. Shermer M. The Mind of the Market. New York, NY: Henry Holt & Co.; 2008
  9. Popper K. The Logic of Scientific Discovery. London, UK: Routledge; 2002. pp. 79-82
  10. Parker R, Bakken D. Predicting the unpredictable: Agent-based models for market research. In: Mouncey P, Wimmer F, editors. Best Practice in Market Research. New York, NY: John Wiley and Sons; 2007
  11. Parker R, Perroud D. Exploring markets with agent-based computer simulations. In: 60th Annual ESOMAR Congress; Montreal, Canada. 2008
  12. Kunda Z. Social Cognition. Cambridge, MA: MIT Press; 1999
  13. Gilbert N, Troitzsch K. Simulation for the Social Scientist. Milton Keynes, UK: Open University Press; 1999
  14. Axelrod R. The Complexity of Cooperation: Agent-Based Models of Competition and Collaboration. Princeton, NJ: Princeton University Press; 1997
  15. Wooldridge M. An Introduction to MultiAgent Systems. New York, NY: John Wiley and Sons; 2002
  16. Jackson D. Fourier Series and Orthogonal Polynomials. Carus Mathematical Monographs. Oberlin, OH: Mathematical Association of America; 1941
  17. Grenander U, Rosenblatt M. Statistical Analysis of Stationary Time Series. New York, NY: John Wiley and Sons; 1952
  18. Ben-Akiva M, Lerman S. Discrete Choice Analysis. Cambridge, MA: MIT Press; 1985
  19. Louviere J, Hensher D, Swait J. Stated Choice Methods. Cambridge, UK: Cambridge University Press; 2000
  20. Cox D, Oakes D. Analysis of Survival Data. London, UK: Chapman and Hall; 1984
  21. Hewitt E, Stromberg K. Real and Abstract Analysis. Heidelberg, DE: Springer-Verlag; 1965. pp. 94-98
  22. Auffhammer M, Carson R. Forecasting the path of China’s CO2 emissions using province-level information. Journal of Environmental Economics and Management. 2008;55(3):229-247. DOI: 10.1016/j.jeem.2007.10.002
  23. Parker R, Cenesizoglu T, Carson R. Aggregation Issues in Forecasting Aggregate Demand: An Application to U.S. Commercial Air Travel, Advanced Research Techniques Forum. Coeur d’Alene, ID: American Marketing Association; 2005
  24. Chang C, Keisler H. Model Theory. Amsterdam, NL, North Holland: Elsevier; 1973
  25. Humphreys P. Extending Ourselves: Computational Science, Empiricism, and Scientific Method. Oxford, UK: Oxford University Press; 2004. pp. 108-109
  26. UIUC. Guide to the Human Relations Area Files. University of Illinois at Urbana-Champaign; 2009
  27. Brown D. Human Universals. New York, NY: McGraw-Hill; 1991
  28. Epstein J. Generative Social Science: Studies in Agent-Based Computational Modeling. Princeton, NJ: Princeton University Press; 2006
  29. Ferguson T. Mathematical Statistics: A Decision Theoretic Approach. San Diego, CA: Academic Press; 1967
  30. Gigerenzer G, Selten R. Bounded Rationality: The Adaptive Toolbox. Cambridge, MA: MIT Press; 2001
  31. Gigerenzer G. Adaptive Thinking: Rationality in the Real World. Oxford, UK: Oxford University Press; 2000
  32. Gladwell M. Blink. Boston, MA: Little Brown and Company; 2005
  33. Todd P. Fast and frugal heuristics for environmentally bounded minds. In: Gigerenzer G, Selten R, editors. Bounded Rationality: The Adaptive Toolbox. Cambridge, MA: MIT Press; 2001
  34. Tversky A. Elimination by aspects: A theory of choice. Psychological Review. 1972;79:281-299
  35. Dawes R. The robust beauty of improper linear models in decision making. American Psychologist. 1979;34:571-582
  36. Clark A. Being There: Putting Brain, Body and World Together Again. Cambridge, MA: MIT Press; 1997
  37. Sternberg R, Ben-Zeev T. Complex Cognition: The Psychology of Human Thought. Oxford, UK: Oxford University Press; 2001
  38. Erdos P, Renyi A. On the evolution of random graphs. In: Newman M, Barabasi A, Watts D, editors. The Structure and Dynamics of Networks. Princeton, NJ: Princeton University Press; 1960
  39. Barabasi A. Linked: The New Science of Networks. New York, NY: Perseus; 2002
  40. Buchanan M. Small Worlds and the Groundbreaking Theory of Networks. New York, NY: W.W. Norton and Co; 2002
  41. Dorogovtsev S, Mendes J. Evolution of Networks: From Biological Nets to the Internet and WWW. Oxford, UK: Oxford University Press; 2003
  42. Newman M, Barabasi A-L, Watts D. The Structure and Dynamics of Networks. Princeton, NJ: Princeton University Press; 2008
  43. Epstein J, Axtell R. Growing Artificial Societies: Social Sciences from the Bottom Up. Cambridge, MA: The MIT Press; 1996. pp. 130-135
  44. Taleb N. The Black Swan. London, UK: Random House; 2007
  45. Arrow K. Social Choice and Individual Values. 2nd ed. New Haven, CT: Yale University Press; 1963
  46. Festinger L, Riecken H, Schachter S. When Prophecy Fails: A Social and Psychological Study. New York, NY: HarperCollins; 1964
  47. Tumminia D. When Prophecy Never Fails: Myth and Reality in a Flying-Saucer Group. Oxford, UK: Oxford University Press; 2006
  48. Tavris C, Aronson E. Mistakes Were Made (but Not by Me). San Diego, CA: Harcourt; 2007
  49. Simons D, Chabris C. Gorillas in our midst: sustained inattentional blindness for dynamic events. Perception. 1999;28:1059-1074
  50. Pronin E, Gilovich T, Ross L. Objectivity in the eye of the beholder: Divergent perceptions of bias in self versus others. Psychological Review. 2004;111:781-799
  51. Kruger J. Personal beliefs and cultural stereotypes about racial characteristics. Journal of Personality and Social Psychology. 1996;71:536-548
  52. Nisbett R, Ross L. Human Inference: Strategies and Shortcomings of Social Judgment. Upper Saddle River, NJ: Prentice-Hall; 1980

Notes

  1. He uses time as the dimension carrying the dynamics, primarily because of its unidirectionality. Other dimensions, such as spatial coordinates, could also be used, but they do not necessarily have this unidirectional property.
  2. But not all models are narratives. There is no need for a time dimension in every case.
  3. Cultural relativism per se became prominent in post-World War II sociological research largely in an attempt to refute the naïve “survival of the fittest” philosophy exemplified by the German Nazi era.
  4. Specifically, by pointing out that adolescents in Samoa indeed led stressful lives, just like teenagers everywhere else in the world.
  5. Ferguson goes on to justify the application of Bayes’ theory to statistics with his treatise, arguing that this game-theoretic perspective created a persuasive demonstration of the superiority of Bayesian statistical analysis. The text was written amid the roiling Bayes versus frequentist debate within the statistics community prominent in the latter half of the twentieth century, and which chugs along today with a steady, if tedious, background din.
  6. It is easy to create a simple example where Bayesian analysis generates formulations which are not only mentally intractable, but mathematically intractable as well. Consider virtually any non-trivial situation where there is no conjugate prior.
  7. This term was coined by a colleague who graduated from Oxford and has a sharp interest in the English language. He was assisted—although he did not know it—by a friend whose command of English is also admirable, and American. The American refined the Oxford graduate’s original construction, which was virtually unpronounceable.
  8. However, there are potentially some interesting agent-based models to be built that explore the effects of the biased choice protocol versus the rational on the outcome of a narrative.
  9. She’s wrong, in one respect. Formal probability theory, where probability is defined in terms of normed measure spaces, dates back only 85 years or so.
  10. This puzzle was first presented by the columnist Marilyn vos Savant in a U.S. national Sunday newspaper supplement a few years ago. When she explained how the probability of winning went to 2/3 if the other door were selected, a firestorm of protest erupted from the academic statistics world. She is right, however.
