Open access peer-reviewed chapter
By Dariusz G. Mikulski
Submitted: December 7th, 2011. Reviewed: October 1st, 2012. Published: March 27th, 2013.
In certain multi-agent systems, the interactions between agents result in the formation of relationships, which can be leveraged for cooperative or collaborative activities. These relationships generally constrain individual-agent actions, since relationships imply that at least one contract (or mutual agreement) between the agents must exist. There is always some uncertainty as to whether or not either agent can or will satisfy some contract requirement – especially at the creation of a new contract. But in order to maintain the existence of a contract, each agent must overcome this uncertainty and assume that the other will do the same. The mechanism that facilitates this “act of faith” is generally regarded as “trust.” In essence, each agent (whether a person or organization) in a relationship mutually trusts that the loss of some control will result in cooperative gains that neither agent could achieve alone.
In general, trust helps agents deal with uncertainty by reducing the complexity of expectations in arbitrary situations involving risk, vulnerability, or interdependence. This is because agents rely on trust whenever they need to gauge something they cannot ever know precisely with reasonable time or effort. The benefits of trustworthy relationships include lower defensive monitoring of others, improved cooperation, improved information sharing, and lower levels of conflict. But the reliance on trust also exposes people to vulnerabilities associated with betrayal, since the motivation for trust – the need to believe that things will behave consistently – exposes individuals to potentially undesirable outcomes. Thus, trust is a concept that must not only be managed, but also justified.
Since agents in an arbitrary system are always assumed to have selfish interests, the goal of each agent is to try to find the most fruitful relationships in a pool of potential agents. That said, we cannot assume that agents do not already have pre-existing relationships with other agents. Furthermore, some agents may actually be within strongly-connected sub-system groups known as coalitions, where every agent in the coalition has a relationship with every other agent in the coalition. A coalition may contain a mixture of trustworthy and untrustworthy agents – but, as a group, it achieves cooperative gains that no sub-coalition could match. Thus, agents may be justified in forming relationships with coalition members who are not ideally trustworthy in order to acquire these cooperative gains as well.
As a simple example to illustrate this concept, consider two geographically-separated agents (who have never physically met) who would like to conduct a financial transaction in exchange for some good. One agent must provide the good (through the mail) and the other must provide the payment (through the mail or electronically). If both agents follow their economic best interest, then neither agent should participate in the transaction, since both agents are vulnerable to betrayal. This is because neither agent can truly verify the intent of the other agent before they act. Thus, if a transaction takes place, it can be entirely attributed to trust, since both agents needed to overcome the uncertainty associated with the transaction. Let us suppose, however, that the value of the good and the size of the payment are sufficiently high such that no amount of mutual trust allows a direct transaction to take place. To handle this situation, both agents could form a coalition with a mutually trusted third party, such as an escrow agent. The escrow agent would receive the payment from one agent to verify that the good can be shipped, and then later disburse the payment to the other agent (minus the escrow fee) when the good has been verified as received. Here, each agent benefits from the cooperative gains of the transaction. These gains would not be possible if even one agent chose to disband from the coalition.
This chapter intends to show how one could mathematically describe these types of trust-based interactions via the cooperative trust game to predict coalition formation and disbanding. It presents a rigorous treatment of coalition formation using cooperative game theory as the underlying mathematical framework. It is important to highlight that cooperative game theory is significantly different from the more widely recognized competitive (non-cooperative) game theory. Cooperative game theory focuses on what groups of self-interested agents can achieve. It is not concerned with how agents make choices or coordinate in coalitions, and does not assume that agents will always agree to follow arbitrary instructions. Rather, cooperative game theory defines games that tell how well a coalition can do for itself. And while the coalition is the basic modeling unit for coalitional games, the theory supports modeling individual agent preferences without concern for their possible actions. As such, it is an ideal framework for modeling trust-based coalition formation, since it can show how each agent’s trust preferences can influence a group’s ability to reason about trustworthiness. We refer the reader to the literature for an excellent primer on cooperative game theory.
This section characterizes different classes of trust games within the context of cooperative game theory. Our characterizations provide the necessary conditions for a coalition trust game to be classified into a particular class. We start with additive and constant-sum trust games, which have limited value for cooperative applications, but are included for completeness. Then, we discuss superadditive and convex trust games, which show conditions for agents to form a grand coalition. In general, grand coalition solution concepts presented here can also be applied to smaller coalitions within a trust game through the use of a trust subgame.
Let $G = (N, v)$ be a coalitional trust game with transferable utility, where $N$ is the finite set of agents and $v : 2^N \to \mathbb{R}$ is the characteristic payoff function with $v(\emptyset) = 0$.
The transferable utility assumption means that payoffs in a coalition may be freely distributed among its members. With regards to payoff value of trust between agents, this assumption can be interpreted as a universal means for agents to mutually share the value of their trustworthy relationships. Trust cultivation often requires reciprocity between two agents as a necessary behavior to develop trust, and a transferable utility is a convenient way to model the exchange for this notion.
In defining a transferable payoff value of trust, one aspect to consider is the “goods of trust”. These refer to opportunities for cooperative activity, knowledge, and autonomy. In this chapter, we refer to these goods as trust synergy, which is a trust-based result that could not be obtained independently by two or more agents. We may also interpret trust synergy as the value obtained by agents in a coalition as a result of being able to work together due to their attitudes of trust for each other. In defining a set function for trust synergy, it is important to explicitly show how each agent’s attitude of trustworthiness for every other agent in a coalition affects this synergy. In general, higher levels of trust in a coalition should produce higher levels of synergy.
The payoff value of trust, however, also includes an opposing force in the form of vulnerability exposure, which we refer to as trust liability. Trusting involves being optimistic that the trustee will do something for the truster; and this optimism is what causes the vulnerability, since it restricts the inferences a truster makes about the likely actions of the trustee. However, the refusal to be vulnerable tends to undermine trust, since it does not allow others to prove their own trustworthiness, stifling growth in trust synergy. Thus, we see that agents in trust-based relationships with other agents must be aware of the balance between the values of the trust synergy and trust liability in addition to their relative magnitudes.
Let the characteristic payoff function of a trust game be the difference between the trust synergy and trust liability of a coalition.
Additive games are considered inessential games in cooperative game theory, since the value of the union of two disjoint coalitions, $v(S \cup T)$, is equivalent to the sum of the values of each coalition: $v(S \cup T) = v(S) + v(T)$ for all disjoint $S, T \subseteq N$.
We see that the total value of the trust relationships between any two disjoint coalitions must always be zero. In other words, the trust synergy between any two disjoint coalitions must always result in a value that is equal to their trust liability. Thus, by expanding this definition for trust games and rearranging the terms, we can characterize an additive trust game as:
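Since the original display is not reproduced here, a plausible rendering of this characterization, writing $\mathrm{TS}$ and $\mathrm{TL}$ for trust synergy and trust liability (symbol names assumed), is

```latex
\mathrm{TS}(S \cup T) - \mathrm{TS}(S) - \mathrm{TS}(T)
\;=\; \mathrm{TL}(S \cup T) - \mathrm{TL}(S) - \mathrm{TL}(T)
\qquad \forall\, S, T \subseteq N,\; S \cap T = \emptyset
```

i.e., the synergy created by joining two disjoint coalitions is exactly cancelled by the liability created.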
In constant-sum games, the value of any coalition and the value of its complement always sum to the same constant, regardless of the outcome: $v(C) + v(N \setminus C) = v(N)$ for every $C \subseteq N$.
By expanding this definition for trust games and rearranging the terms, we can see that the constant-sum trust game is a special case of a two-coalition additive trust game involving every agent in the game.
Definition: An agent is a dummy agent if the amount the agent contributes to any coalition is exactly the amount that it is able to achieve alone.
Theorem: Every constant-sum trust game is a zero-sum trust game.
Proof: If the game is constant-sum, the following constraint for singleton coalitions must always hold: $v(\{i\}) + v(N \setminus \{i\}) = v(N)$ for every agent $i \in N$.
By rearranging the terms, combining, and substituting, we get:
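The omitted algebra can plausibly be reconstructed as follows, taking the constant sum to be $v(N)$ (since $v(\emptyset) = 0$):

```latex
v(\{i\}) + v(N \setminus \{i\}) = v(N)
\;\Longrightarrow\;
v(N) - v(N \setminus \{i\}) = v(\{i\}), \qquad \forall\, i \in N
```

That is, each agent's marginal contribution to the grand coalition equals the value it can achieve alone, which is exactly the dummy-agent condition.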
The last equation implies that every agent in the game must behave like a dummy agent if the game is a constant-sum trust game. Since all agents behave like dummy agents and $v(\{i\}) = 0$ for every singleton, any coalition that forms will have no value. Hence, the value of the grand coalition is zero (i.e., $v(N) = 0$). Therefore, the only possible constant-sum trust game is the zero-sum trust game. This completes the proof.
Corollary: The game is a zero-sum trust game if the stated condition holds.
Proof: Under the stated condition, every possible coalition in the game must behave like a coalition of dummy agents in a constant-sum trust game, and combinations of coalitions will yield no additional value. Hence, the value of the grand coalition is always zero (i.e., $v(N) = 0$). This completes the proof.
Our proofs show that any constant-sum trust game is necessarily a zero-sum trust game that represents a special case of an additive trust game. These facts reinforce the notion that a group of agents who do not trust each other will always prefer to work as singleton coalitions. And even if there is some mutual trust between agents, gains from trust synergy are always lost to the trust liability, making it irrational to form any coalition with any other agent. Thus, if one determines that a game is a constant-sum trust game, then this provides immediate justification for using non-cooperative game theory as the basis for modeling the purely competitive agents.
In a superadditive game, the value of the union of two disjoint coalitions, $v(S \cup T)$, is never less than the sum of the values of each coalition: $v(S \cup T) \ge v(S) + v(T)$ for all disjoint $S, T \subseteq N$.
This implies a monotonic increase in the value of any coalition as the coalition gets larger.
This property of superadditivity tells us that the new links established between the agents in the two disjoint coalitions are the sources of the monotonic increases. This results in a snowball effect that causes all agents in the game to form the grand coalition (a coalition containing all agents in the game), since the total value of the new trust relationships between any two disjoint coalitions must always be nonnegative. In other words, the trust synergy between any two disjoint coalitions must always result in a value that is at least as large as their combined individual trust liabilities. Thus, by expanding the definition for trust games and rearranging the terms, we can characterize a superadditive trust game as:
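Since the original display is not reproduced here, a plausible rendering, again writing $\mathrm{TS}$ and $\mathrm{TL}$ for trust synergy and trust liability (symbol names assumed), is

```latex
\mathrm{TS}(S \cup T) - \mathrm{TS}(S) - \mathrm{TS}(T)
\;\ge\; \mathrm{TL}(S \cup T) - \mathrm{TL}(S) - \mathrm{TL}(T)
\qquad \forall\, S, T \subseteq N,\; S \cap T = \emptyset
```

so the synergy created by joining two disjoint coalitions at least covers the liability created.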
A game is convex if it is supermodular, i.e., if $v(S \cup T) + v(S \cap T) \ge v(S) + v(T)$ for all $S, T \subseteq N$, and this trivially implies superadditivity (when $S \cap T = \emptyset$). Thus, we see that convexity is a stronger condition than superadditivity, since the restriction that the two coalitions must be disjoint no longer applies.
In convex games, the incentive to join a coalition grows as the coalition gets larger. This means that the marginal contribution of each agent is non-decreasing: $v(S \cup \{i\}) - v(S) \le v(T \cup \{i\}) - v(T)$ whenever $S \subseteq T \subseteq N \setminus \{i\}$.
Definition: A subgame $(R, v_R)$, where $R \subseteq N$ is not empty, is defined by $v_R(C) = v(C)$ for each $C \subseteq R$. In general, solution concepts that apply to a grand coalition can also apply to smaller coalitions in terms of a subgame.
Definition: Given a game $(N, v)$ and a coalition $R \subseteq N$, the R-marginal game $(N \setminus R, v^R)$ is defined by $v^R(C) = v(C \cup R) - v(R)$ for each $C \subseteq N \setminus R$.
Using these definitions, Branzei, Dimitrov, and Tijs proved that a game is convex if and only if all of its marginal games are superadditive. We provide their proof here as a means for the reader to readily justify this assertion.
A game is convex if and only if, for each $R \subseteq N$, the R-marginal game $v^R$ is superadditive.
Proof: Suppose the game is convex. Let $R \subseteq N$ and let $S, T \subseteq N \setminus R$ be disjoint. Then:
where the inequality follows from the convexity of the game. Hence, the marginal game $v^R$ is convex (and superadditive as well).
Now suppose that, for each $R \subseteq N$, the R-marginal game $v^R$ is superadditive, and let $S, T \subseteq N$. If $S \cap T = \emptyset$, then the marginal game for $R = \emptyset$ coincides with the original game; hence, the game is superadditive. If $S \cap T \neq \emptyset$, then because the $(S \cap T)$-marginal game is superadditive:
This completes the proof.
By using this characterization in the previous theorem and expanding it to our definition of a trust game, we can state a necessary requirement to produce a convex trust game: that the marginal trust synergy between any two coalitions must always result in a value that is at least as large as their marginal trust liability.
In the previous section, we characterized different classes of trust games without explicitly defining a trust game model. In this section, we provide a general model for trust games that conforms to the theoretical constructions in the previous section and can be adapted to a wide variety of applications.
The attitude of trustworthiness that agents have toward other agents in a trust game is managed in an $n \times n$ trust matrix.
This matrix is populated with values that represent the probability that one agent is trustworthy from the perspective of another. The values can also be interpreted as the probabilities that an agent will allow another agent to interact with it, since rational agents prefer to interact with more trustworthy agents.
The manner in which the trust matrix is evaluated depends on an underlying trust model. We make no assumption about the use of a particular trust model, as the choice of an appropriate model may be application-specific. We also make no assumption about the spatial distribution of the agents in a game – therefore, this matrix should not necessarily imply the structure of a communications graph.
We provide a general model for trust synergy and trust liability that can be adapted for a variety of applications. Our model makes use of a symmetric matrix to manage potential trust synergy and a second matrix to manage potential trust liability. The synergy matrix is symmetric because we assume that agents mutually agree on the benefits of a synergetic interaction.
As with the trust matrix, we make no assumptions about how the synergy and liability values are calculated, since their meaning may depend on the application. For example, the calculations for the synergy and liability between two agents may not only take into account each agent’s individual intrinsic attributes – they may also factor in externalities (e.g., political climate, weather conditions, pre-existing conditions) that neither agent has direct control over.
Definition: The total value of the trust synergy in a coalition is defined as the following set function:
Trust synergy is the value obtained by agents in a coalition as a result of being able to work together due to their attitudes of trust for each other. The set function assumes that the events “agent $i$ allows agent $j$ to interact” and “agent $j$ allows agent $i$ to interact” are independent. This is reasonable since agents are assumed to behave as independent entities within a trust game (i.e., no agent is controlled by any other agent). Therefore, we treat the product of the two trust values as the relative strength of a trust-based synergetic interaction, which justifies the use of the summation. The synergy value serves as a weight for a trust-based synergetic interaction.
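A plausible rendering of the omitted set function, consistent with this description (writing $t_{ij}$ for agent $i$'s trust value for agent $j$ and $s_{ij}$ for the symmetric synergy weight; symbol names assumed):

```latex
\mathrm{TS}(C) \;=\; \sum_{\substack{i, j \in C \\ i < j}} s_{ij}\; t_{ij}\; t_{ji}
```

Each unordered pair contributes its synergy weight scaled by the joint probability that both agents allow the interaction.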
Definition: The total value of the trust liability in a coalition is defined as the following set function:
Trust liability can be thought of as the vulnerability that agents in a coalition expose themselves to due to their attitudes of trust for each other. We treat the product of an agent’s trust value and the corresponding liability weight as a measure of that agent’s exposure to unfavorable trust-based interactions from another agent. A high amount of trust can expose agents to high levels of vulnerability. But each agent can regulate its exposure to trust liability by adjusting its trust values. Changes to these values, however, also influence the benefits of trust synergy.
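A plausible rendering of the omitted set function, consistent with this description (writing $l_{ij}$ for the liability weight of agent $i$'s exposure to agent $j$; symbol names assumed):

```latex
\mathrm{TL}(C) \;=\; \sum_{\substack{i, j \in C \\ i \neq j}} l_{ij}\; t_{ij}
```

Note the asymmetry: an agent's exposure scales only with its own trust in the other agent, which is why each agent can regulate its liability unilaterally.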
We define the trust game (also known as the total value of the trust payoff in a coalition) as the difference between its trust synergy and trust liability.
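To make this payoff definition concrete, the following Python sketch computes a coalition's trust payoff as synergy minus liability. The exact functional forms, and the matrices `t` (trust), `s` (synergy weights), and `l` (liability weights), are assumptions consistent with the verbal description above, not the chapter's own equations.

```python
from itertools import combinations

def trust_payoff(coalition, t, s, l):
    """Trust payoff v(C) = TS(C) - TL(C) under the assumed model."""
    members = sorted(coalition)
    # Trust synergy: each unordered pair contributes its synergy weight,
    # scaled by the joint probability that both agents allow the
    # interaction (the independence assumption in the text).
    ts = sum(s[i][j] * t[i][j] * t[j][i] for i, j in combinations(members, 2))
    # Trust liability: agent i's exposure to agent j scales with how
    # much i trusts j.
    tl = sum(l[i][j] * t[i][j] for i in members for j in members if i != j)
    return ts - tl

# Two agents with full mutual trust: synergy weight 3, liability 1 each way.
t = [[1.0, 1.0], [1.0, 1.0]]
s = [[0.0, 3.0], [3.0, 0.0]]
l = [[0.0, 1.0], [1.0, 0.0]]
print(trust_payoff({0, 1}, t, s, l))  # 3 - (1 + 1) = 1.0
print(trust_payoff({0}, t, s, l))     # singletons have no links: 0
```

Note that singleton coalitions always have zero payoff in this sketch, which matches the chapter's observation that agents who trust no one end up as valueless singletons.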
The factorization shows us that the first factor will always be greater than or equal to zero, while the second factor can be either positive or negative. Hence, by isolating the second factor and recognizing that trust values equal to 1 produce the smallest possible reduction in the second factor, we can state the condition that guarantees the potential for two agents to form a trust-based pair coalition.
Proposition 1: Two agents will never form a trust-based pair coalition if the stated condition fails. Otherwise, the potential exists for the two agents to form a trust-based pair coalition.
Proposition 2: If two agents can never form a trust-based pair coalition, then the best strategy for both agents is to never trust each other (i.e., to set their mutual trust values to zero).
In general, Proposition 1 does not extend to trust-based coalitions larger than two, due to the complex coupling of trust dynamics between different agents as coalitions grow larger. For example, two agents who may produce a negative trust payoff value as a pair may actually realize a positive trust payoff with the addition of a third agent. This situation occurs if both agents have positive trust relationships with the third agent that outweigh their own negative trust relationship. Such a situation is common in real-world scenarios, and justifies the importance of various trusted third parties, such as escrow companies, website authentication services, and couples therapists.
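The trusted-third-party effect described above can be checked numerically. This sketch reuses the same assumed payoff form (pair synergy weighted by mutual trust, liability weighted by one-way trust); the matrices are hypothetical illustrations, not values from the text.

```python
from itertools import combinations

def trust_payoff(coalition, t, s, l):
    """v(C) = trust synergy minus trust liability (assumed functional form)."""
    members = sorted(coalition)
    ts = sum(s[i][j] * t[i][j] * t[j][i] for i, j in combinations(members, 2))
    tl = sum(l[i][j] * t[i][j] for i in members for j in members if i != j)
    return ts - tl

# Agents 0 and 1 generate little synergy together (weight 1) but a lot
# with agent 2 (weight 5); every trusting link carries liability weight 1.
t = [[1.0] * 3 for _ in range(3)]          # full mutual trust everywhere
s = [[0, 1, 5], [1, 0, 5], [5, 5, 0]]
l = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]

print(trust_payoff({0, 1}, t, s, l))       # 1 - 2 = -1.0: pair not viable
print(trust_payoff({0, 1, 2}, t, s, l))    # 11 - 6 = 5.0: triple is viable
```

The pair's payoff is negative on its own, yet adding a mutually trusted third agent turns the coalition's total payoff positive, mirroring the escrow-agent example from the introduction.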
In light of this, we can mathematically justify a condition similar to proposition 1 that is valid for coalitions of any size – but only for a special type of trust game.
Theorem: A trust-based coalition will never form if:
Proof: Let the trust, synergy, and liability values be uniform across all agent pairs in the coalition. Then, by substituting into the trust model:
Because the first factor is a constant that is always greater than or equal to zero, we can clearly see that the second factor determines whether the payoff is positive or negative. Hence, if the second term in the second factor is larger than the first term in the second factor, then a coalition will never form. This completes the proof.
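Under the uniform-values assumption of the proof, with trust value $t$, synergy weight $s$, and liability weight $l$ shared by all pairs in a coalition of size $k$ (symbols assumed here for illustration), the payoff would factor as

```latex
v(C) = \binom{k}{2}\, s\, t^{2} \;-\; k(k-1)\, l\, t
     \;=\; \frac{k(k-1)\, t}{2}\,\bigl(s\, t - 2l\bigr)
```

The first factor is nonnegative, so the coalition can never form when the second factor is negative for every feasible $t \in (0, 1]$, i.e., when $s < 2l$.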
In practice, trust is often defined relative to some context. Context allows individuals to simplify complex decision-making scenarios by focusing on more narrow perspectives of situations or others, avoiding the potential for inconvenient paradoxes.
Coalitional trust games can also be defined relative to different contexts using the multi-issue representation, where we use the words “context” and “issue” interchangeably.
Definition: A multi-issue representation is composed of a collection of coalitional games, each known as an issue, which together constitute the overall coalitional game.
For each coalition, the value of the overall game is the sum of the values that each issue assigns to that coalition.
This approach allows us to define an arbitrarily complex trust game that can be easily decomposed into simpler trust games relative to a particular context. A set of agents in one context can overlap partially or completely with a set of agents in another context. And one can choose to treat the coalitional game in one big context, or as the union of any number of contexts, based on some decision criteria.
In the analysis of a trust-based coalition, it may sometimes be useful to understand the manner in which different subsets of a coalition contribute to its payoff value. One way to do this is to use a framework developed by Arney and Peterson, where measures of cooperation are defined in terms of altruistic and competitive cooperation. The unifying concept in the framework is the subset team game, a situation or scenario in which the value of a given outcome (as perceived by a team subset) can be measured.
Definition: Given a game and a non-empty coalition, the subset team game associates a payoff value perceived by the agents in the subset when the agents in the coalition cooperate.
The authors limit the application of the framework to games where more agents in a coalition lead to more successful outcomes. Thus, adding more agents to a coalition should never reduce the coalition’s payoff value. Also, the payoff value perceived by a coalition should not be smaller than the payoff value perceived by a subset of the same coalition. We refer to these two properties as fully-cooperative and cohesive, respectively.
Definition: A subset team game is fully-cooperative if adding an agent to a coalition never reduces any perceived payoff value.
Definition: A subset team game is cohesive if the payoff value perceived by a subset of a coalition never exceeds the payoff value perceived by the coalition itself.
The authors show that in a fully-cooperative and cohesive game, the marginal contribution of a subset team is equal to the sum of the competitive and altruistic contributions of the subset team.
Definition: Given a payoff function in a subset team game, the total marginal contribution of to a team is . If the game is both cohesive and fully-cooperative, then the competitive contribution of is and the altruistic contribution is . Note that the total marginal contribution decomposes as
In order to use these definitions within a trust game, we must first show they relate to the coalition game classes described in Section 3.
Theorem: A subset team game that is both fully-cooperative and cohesive is a convex game.
Proof: First, we prove the fully-cooperative case. If an additional agent is not already inside either coalition, then the following inequalities are also true:
Since the system of inequalities shows that the contribution of an additional agent in a coalition is always non-decreasing, it is trivially true that:
Next, we prove the cohesive case. If an additional agent is not already inside either coalition, then the following inequalities are also true:
Since the system of inequalities shows that the contribution of an additional agent in the accessing coalition subset is always non-decreasing, it is trivially true that:
This completes the proof.
It is important to note that the additional agent for both cases is never already inside either coalition. If it were, the proof would be invalid, as one could easily demonstrate counterexamples in cases where the agent is already a member of one of the coalitions.
Now that we have shown that a subset team game that is both fully-cooperative and cohesive is convex, we may decompose the total marginal contribution of a set of agents into altruistic and competitive contributions whenever a trust game satisfies these properties. To do so, we must define a value function that utilizes the trust game payoff value function.
Definition: Given a game and a non-empty coalition, the subset trust game associates a trust payoff value perceived by the agents in the subset when the agents in the coalition cooperate:
The rationale behind this payoff function is that the payoff has to be from the perspective of the agents in the subset. These agents can factor in the values related to relationships among themselves (first term) and relationships between themselves and the remaining agents in the coalition (second term). But they cannot factor in values related to relationships among the remaining agents, since the agents in the subset are assumed to have no direct knowledge of what is happening between those agents.
Using this payoff function, we can calculate a subset’s altruistic contribution and competitive contribution in a coalition.
In this section, we present an example of cooperative trust for a specific application: the convoy. Our primary purpose here is to demonstrate how one could use the theory in this chapter to model specific scenarios involving trust. We define the convoy trust game, which describes a cooperative game where the agents intend to move forward together in a single file. This type of game can be naturally adapted to the analysis of traffic patterns, leader-follower applications, hierarchical organizations, or applications with sequential dependencies. Our goal in this section is to understand how trust-based coalitions will form under this type of scenario.
We begin with a simple scenario that models a four-agent convoy, $N = \{1, 2, 3, 4\}$, which intends to move together in a single file. The value of each agent’s index represents its position in the convoy, with agent 1 in the lead. For this scenario, we interpret the trust synergy in a coalition to represent the agents in the coalition moving forward. Thus, we set the values in the trust synergy matrix equal to the number of agents that will move forward if the two agents are moving forward (inclusive of the two agents). We interpret the trust liability in a coalition to represent the vulnerability of agents in the coalition to stop moving. Thus, we set the values in the trust liability matrix equal to the number of agents that can prevent a particular agent from moving forward in an agent coalition pair.
Definition: The values in the trust synergy and trust liability matrices for a 4-convoy trust game are:
First, let us analyze this game as an additive trust game. While there are infinitely many solutions for the trust matrix that conform to the additive game, the most obvious solution is the extreme situation where no agent trusts any other agent – that is, when the trust matrix is the identity matrix. In this case, no agent will ever affect another agent, either positively or negatively. Thus, each agent will ultimately form a singleton coalition and fail to work cooperatively with any other agent.
Next, let us analyze another extreme situation where every agent completely trusts every other agent – that is, when every entry of the trust matrix equals one. As such, we can enumerate the trust payoff values for each possible coalition.
These results provide us with an interesting insight: all agents behind the lead agent find higher values of trust payoff with the lead agent than with the nearest agent. As such, as long as the lead agent is a member of a trust-based coalition in this game, there will be no incentive for any other agent to abandon the coalition. Thus, the agents ultimately form the grand coalition. Note, however, that the formation of a grand coalition does not imply that the trust game is superadditive or convex. This assertion can be verified by checking the enumerated payoff values against the superadditivity condition.
In order to form a convex 4-convoy trust game, we must satisfy the conditions that ensure that the trust payoff value of any coalition is at least as large as that of any sub-coalition – or, equivalently, that the marginal trust synergy is always greater than or equal to the marginal trust liability. While, again, there are infinitely many solutions for the trust matrix that conform to a convex game, the games with the highest trust payoff actually have one of the following trust matrices (see the next section for proof):
The listed matrices are modified versions of the full-trust matrix, and all produce the same results in the trust payoff value function. The main modification ensures that agents 3 and 4 have no trust toward each other, since the sum of their individual trust liabilities always outweighs the trust synergy they create. The following is the enumeration of the trust payoff values for the 4-convoy trust game with the highest trust payoff:
The deep insight we gain from analyzing the optimal trust matrices and payoff value results is that all agents behind the lead agent need only trust the lead agent in the convoy to move forward, provided the lead agent trusts every other agent to follow it. This echoes the intuition seen in Jean-Jacques Rousseau’s classic “stag hunt” game, where there is no incentive for any player to cheat by not cooperating as long as each player can trust the others to do the same.
We can use anecdotal evidence found in our experiences in automobile traffic jams to verify our understanding of the theoretical result. Drivers in traffic lanes (coalitional convoys) rarely place a significant amount of trust in neighboring drivers to justify the value of the traffic lane (as the model corroborates). In fact, in the event a driver becomes stuck in a traffic jam, he likely will not feel betrayed by the driver directly in front. Instead, he will unconsciously begin gauging the coalitional value of the traffic jam by considering his level of trust in the lead driver in the traffic jam, whether in visible range or not. In most cases, the driver monitors the traffic flow or listens to traffic reports to gauge his trust for the lead driver. He may also unconsciously consider other drivers in the traffic jam and estimate their trust perceptions of the traffic jam to gauge the coalition’s value. In the event a driver cannot accurately gauge the value of the traffic jam, he may choose to leave the traffic jam and attempt to join another traffic coalition (lane) with a higher payoff value. These types of driver behaviors are generally not performed when the trust for the lead driver to move forward is high. Yet, these behaviors feel necessary when the trust lessens since they attempt to resolve coalitional and environmental uncertainties.
We conclude by generalizing the convoy trust game for any number of agents and prove the solution for the highest payoff trust-based coalition. Our proof shows that all agents behind the lead agent in a convoy need only trust the lead agent, and no other agent, to move forward so long as the lead agent trusts every other agent to follow it.
Definition: The generalized values in the trust synergy and trust liability matrices for a convoy trust game with $n$ agents are:
Theorem: The convoy trust game that produces the grand coalition with the highest payoff value has a trust matrix that conforms to the following construction:
Proof: Suppose we generalize the values in the trust synergy and trust liability matrices. According to Proposition 1, two agents will never form a trust-based coalition pair if the stated condition fails. Thus, by substitution:
We see that if is the maximum value, then . Similarly, if is the maximum value, then . Thus, the inequalities tell us that any agent behind the second agent will never form a trust-based coalition with any other agent behind the second agent. Therefore, by proposition 2, the best strategy for these agents is to have no trust for each other; hence when for .
The equalities above also tell us that a trust-based coalition formation is possible with the lead agent and the second agent. Using the definition of the trust game model, the trust payoff values for a coalition in the convoy trust game are:
We may now define the trust payoff values for any pair of agents as:
Let us first analyze coalition formation with the lead agent. If , then . Therefore, the payoff value for a pair coalition between and is:
By inspection, we see that the highest trust payoff value is achieved when both the lead agent and any other agent completely trust each other (i.e., when both trust values equal 1). However, to justify this assertion, we must also show this is true when . If , then . Therefore, the payoff value for a pair coalition between and is:
Again, by inspection, we confirm that the highest trust payoff is achieved when both the lead agent and any other agent completely trust each other. Therefore, the trust values between the lead agent and every other agent equal one in the optimal trust matrix.
Now, we analyze coalition formation with the second agent. If , then . Therefore, the payoff value for a pair coalition between and is:
The highest trust payoff that can be achieved with the second agent is equal to zero, and this only occurs when both agents either have complete trust in each other (i.e., when both trust values equal 1) or no trust in each other (i.e., when both trust values equal 0). Any other combination of trust values will produce negative trust payoff values. However, to justify this assertion, we must also show this is true when . If , then . Therefore, the payoff value for a pair coalition between and is:
By inspection, we confirm that the highest trust payoff that can be achieved with the second agent is equal to zero. Therefore, the trust values between the second agent and any agent behind it may equal either zero or one in the optimal trust matrix.
To complete the proof, we simply state our assumption that each agent fully trusts itself, since it is impossible for an agent to diverge from a singleton coalition. Therefore, the diagonal entries of the trust matrix equal one. This completes the proof.
The author would like to acknowledge the Ground Vehicle Robotics group at the U.S. Army Tank Automotive Research, Development and Engineering Center (TARDEC) in Warren, MI for their basic research investment, which resulted in the development of the cooperative trust game theory in this chapter. Furthermore, the author would like to thank his academic advisors, Dr. Edward Gu (Oakland University) and Dr. Frank Lewis (University of Texas at Arlington), for their insight and advice, which tremendously helped to guide the research to a successful outcome.