Open access

Similarity-based Techniques for Trust Management

Written By

Mozhgan Tavakolifard

Published: 01 March 2010

DOI: 10.5772/8386

From the Edited Volume

Web Intelligence and Intelligent Agents

Edited by Zeeshan-Ul-Hassan Usmani


1. Introduction

A network of people with established trust relations and a model for the propagation of related trust scores are fundamental building blocks in many of today's most successful e-commerce and recommendation systems. Many online communities are only successful if sufficient mutual trust exists between their members. Users want to know whom to trust and how much to trust the competence and benevolence of other community members in a specific application domain.

However, the web of trust is often too sparse to predict trust values between unfamiliar people with high accuracy, because, at least in large online communities, a user has experience with only a very small fraction of the other community members. Thus, very often there will be no trust relation at all to an intended new partner of an e-commerce transaction.

The process of building trust is performed in two different ways. First, one can establish trust (or distrust) by gaining direct experience with another party. Of course, every positive event increases the assumed trustworthiness of the trustee, while every negative one reduces it. Second, one can gain trust based on recommendations of third parties. If, for example, Alice has high trust in Bob's ability to assess the trustworthiness of other people, Bob has similar trust in Claire's recommendations, and Claire considers David trustworthy based on her personal experience with him, then Alice also gains trust in David even if she has no or very limited knowledge of him. This form of propagated trust is called trust transitivity. Trust transitivity may provide additional information to alleviate the consequences of the sparsity and possible cold-start problems. Such approaches are helpful, provided that a complete trust path exists between the two users.

Based on these two forms of trust, a so-called web of trust between community members is created, which is often used in recommender systems to help users of e-commerce applications get an idea of the trustworthiness of their mostly personally unknown cooperation partners. Unfortunately, these webs of trust are often too sparse to be helpful in practice since, at least in large online communities, a user has experience with only a very small fraction of the other community members. Thus, very often there will be no trust relation at all to an intended new partner of an e-commerce transaction (Kim et al., 2008).

An alternative approach to the problem is advocated in this chapter. Based on collaborative filtering, one can exploit the like-mindedness, i.e. similarity, of individuals to infer trust towards yet unknown parties, which increases the number of trust relations in the web. For instance, if one knows that, with respect to a specific property, two parties are trusted alike by a large number of different trusters, one can assume that they are similar. Thus, if one has a certain degree of trust in the one party, one can safely assume a very similar trustworthiness of the other. In an attempt to provide high-quality recommendations and proper initial trust values even when no complete trust propagation path or user profile exists, we propose TILLIT1 (Trust Inference Links based on Like-minded Interaction Transitions), a model based on a combination of trust inference and user similarity. The similarity is derived from the structure of the trust graph and users' trust behavior, as opposed to other collaborative-filtering based approaches which use ratings of items or user profiles. We describe an algorithm realizing the approach based on a combination of trust inferences and user similarity, and validate the algorithm using a real large-scale data-set.

TILLIT makes it possible to derive trust not only from direct experience and transitive propagation but also from the similarity between users, and vice versa. In particular, two users are considered similar if they either build akin trust relations to other users or are trusted very similarly by others. This can be used to propagate already known trust to new trust relations encompassing people similar to those of the already known relationships. Thus, the web of trust can be augmented significantly.

In comparison with other approaches based on similarity, our work has the following differences:

  • It intends to alleviate the sparsity problem in the web-of-trust matrix itself instead of in the matrix of users rating items. Since users usually have few items rated in common, the classic recommender-system techniques are often ineffective and unable to compute a user similarity weight for many of the users. Exploiting the web of trust instead, it is possible to propagate trust better and to infer additional trust information about other users.

  • It calculates the similarity from the structure of the web of trust and trust relations (the trust graph structure and trust values) instead of user-item ratings.

  • It proposes methods to convert trust values to similarity measures and vice versa based on the TNA-SL model.

We conduct experiments on a large real dataset showing how our proposed solution increases the coverage (the number of trust relations that can be predicted) while not reducing the accuracy (the prediction error). This is especially true for users who have provided few ratings (Tavakolifard et al., 2009).

The rest of this chapter is organized as follows: Section 2 presents a state-of-the-art survey of the most popular approaches to the sparsity problem and the main directions along which research efforts have been made. Section 3 presents our proposed model. Finally, Section 4 concludes the chapter and outlines some future issues concerning the applicability of the proposed method.


2. State of the Art

The most popular approaches proposed to deal with the sparsity problem include dimensionality reduction of the user-item matrix, application of associative retrieval techniques in the bipartite graph of items and users, item-based similarity instead of user-based similarity, and content-boosted collaborative filtering (see (Papagelis et al., 2005)). The dimensionality reduction approach addresses the sparsity problem by removing unrepresentative or insignificant users or items so as to condense the user-item matrix. We briefly explain those approaches which are based on trust management and similarity measurement and thus more closely resemble our work. Similarity-based approaches can be categorized into two groups: rating-based similarity and profile-based similarity.

Recently, several studies have suggested that incorporating a notion of trust into the standard collaborative filtering model can effectively solve the sparsity problem and thus provide better recommendations. A user can build a personalized web of trust by specifying those friends or users he trusts. The trust web can be constructed through explicit trust ratings provided by users (Hwang & Chen, 2007). Table 1 indicates whether each related work is based on an explicit web of trust or whether the trust is derived from user-item ratings. In addition, the table shows which kind of similarity (rating-based or profile-based) is used.

Paper | Web of trust | User-item ratings | Rating-based similarity | Profile-based similarity
(Papagelis et al., 2005)xx
(Massa & Avesani, 2004)xx
(Massa & Bhattacharjee, 2004)xx
(Massa & Avesani, 2006)x
(Avesani et al., 2004)xx
(Avesani et al., 2005)xx
(Weng et al., 2006)x
(Lathia et al., 2008)x
(Gal-Oz et al., 2008)x
(O’Donovan & Smyth, 2005)xxx
(Ziegler & Golbeck, 2007)xx
(Ziegler & Lausen, 2004)xx
(Golbeck, 2006)xx
(Golbeck & Hendler, 2004)xx
(Golbeck, 2005)x
(Bedi & Kaur, 2006)x
(Bedi et al., 2007)x
(Hwang & Chen, 2007)x
(Peng & Seng-cho, 2009)x
(Kitisin & Neuman, 2006)x
(Fu-guo & Sheng-hua, 2007)xx
(Peng & Seng-cho, 2009)xx
(Victor, De Cock, Cornelis & Teredesai, 2008)x
(Victor, Cornelis, De Cock & Pinheiro da Silva, 2008)x

Table 1.

Categorization of the related work.

In (Papagelis et al., 2005), the authors explain how similarity can benefit from special characteristics of trust, such as the ability to propagate along chains of trusted users; in this way similarity can support transitivity. They develop a model to establish trust between users by exploiting the transitive nature of trust. In their model, they use ordinary measures of similarity taken from collaborative filtering to form the potential trust between users, which is then propagated through a trust graph in a way similar to the word-of-mouth scheme. Finally, by transforming the value back into similarity terms, it can be made appropriate for use in collaborative filtering algorithms. More specifically, for each pair of users they first calculate how similar they are, applying Pearson's correlation coefficient formula to the user-item ratings, and then they calculate the indirect trust between them. Next, this trust value is converted to a similarity metric using their formula. However, their model simply adopts similarity as trustworthiness; hence, it still suffers from the limitations of similarity-based collaborative filtering discussed above. The main contribution of this work is the design of a trust metric which helps a user to quantify the degree of trust it should place in others.

Massa et al. present in (Massa & Avesani, 2004) evidence that, by incorporating trust, recommender systems can be more effective than systems based on traditional techniques like collaborative filtering. They analyze the potential contribution of trust metrics to increasing the performance of recommender systems and propose an architecture for trust-aware recommender systems. In this paper, it is proposed that a peer can establish trust in other peers through explicit trust statements and trust propagation. A trust model is built directly from users' direct feedback. This trust model is incorporated into the recommendation process for recommending various items (such as books, movies, music, software, etc.) to on-line users. Users can express their personal web of trust by identifying those reviewers whose reviews and ratings are consistently found to be valuable. The authors argue that it is possible to predict trust in unknown users by propagating trust even if there is no direct connection between them. However, it is not clear how a user quantifies the degree of trust when making trust statements. The authors show how the similarity measure is, on average, computable only against a very small portion of the user base and is, in most cases, a noisy and unreliable value because it is computed on the few items rated in common by two users. Instead, trust-aware techniques can produce a trust score for a very high number of other users; the trust score of a user estimates the relevance of that user's preferences. In this paper, similarity is measured using Pearson's correlation coefficient on user-item ratings.

They also show, in a subsequent experiment (Massa & Bhattacharjee, 2004), that the combination of a trust metric and a similarity metric can increase the coverage of recommender systems while maintaining recommendation accuracy. This work builds a trust model directly from trust data provided by users as part of the popular epinions.com service. A big limitation of the work in (Massa & Avesani, 2004) and (Massa & Bhattacharjee, 2004) is that the web of trust is built on binary relationships among users and the propagated trust is computed simply from the distances between them.

Massa and Avesani in (Massa & Avesani, 2006) analyze the relative benefits of asking new users either for a few ratings of items or for a few trust statements about other users, for the purpose of bootstrapping an RS's ability to generate recommendations. They run experiments on a large real-world dataset derived from Epinions.com. The results clearly indicate that, while traditional RS algorithms exploiting ratings on items fail for new users, asking a new user for a few trust statements is a very effective strategy that quickly lets the RS generate many accurate item recommendations. The working hypothesis is that inviting users to elicit opinions on users (trust statements) rather than opinions on items shortens the bootstrapping of RSs for cold-start users. The benefits can be summarized as follows: (1) the number of trust statements needed from a new user for bootstrapping a recommender system is much smaller than the number of ratings on items; (2) while exploiting the few early ratings provided by a new user does not enable recommendations to be generated, exploiting just a few early trust statements significantly increases the number of possible recommendations; (3) the accuracy of the generated recommendations increases as well when exploiting trust statements rather than ratings on items. The main contribution of the paper is the empirical proof of this hypothesis. The straightforward impact of this work is a new guideline for recommender-system design: a new user should be invited to name a few other users she trusts rather than to express her opinions on a pool of items.

Avesani et al. in (Avesani et al., 2004; 2005) apply the trust model to the ski mountaineering domain. They present a community-based website in which users can share their opinions about the snow conditions of different ski routes and also express their trust in others' opinions. The trust score of a user depends on the trust statements of other users on him/her and on their trust scores. However, the trust model requires the direct feedback of users, and its effectiveness on the skiing community has not been validated.

The authors of (Weng et al., 2006) propose that peers predict ratings of new items based on the recommendations of peers that are trusted directly or indirectly. A trust metric has been designed to help peers determine the degree of trust that should be placed in others. The design of the trust metric also motivates a novel prediction method featuring recommendation adjustment and pseudo-recommendations. Experimental results show that the trust metric and the corresponding prediction approach improve the performance of traditional similarity-based collaborative filtering in terms of coverage, prediction accuracy, and robustness.

A number of techniques for performing collaborative filtering from the point of view of a trust-management problem are outlined in (Lathia et al., 2008). In this work the authors propose a variation of the k-nearest-neighbor collaborative filtering algorithm using trusted k-nearest recommenders. This algorithm allows users to learn whom and how much to trust by evaluating the utility of the rating information they receive. They mainly address the problem of learning how much to trust rating information received from other users in a recommender system.

A model for computing trust-based reputation for communities of strangers is proposed in (Gal-Oz et al., 2008). The model uses the concept of knots, which are sets of members having high levels of trust in each other. Different knots typically represent different viewpoints and preferences. The assumption underlying this knot-aware reputation model is that using relatively small, but carefully selected, subsets of the overall community's reputation data yields better results than using the full dataset.

In (O'Donovan & Smyth, 2005), O'Donovan and Smyth argue that profile similarity is just one of a number of possible factors that might be used to influence recommendation and prediction; the reliability of a partner profile in delivering accurate recommendations in the past is another important factor. If a profile has made many accurate recommendation predictions in the past, it can be viewed as more trustworthy than another profile that has made many poor predictions. Trust metrics are calculated at both the item and profile levels. Item-level trust represents a producer's trustworthiness with respect to the recommendation of a specific item. Profile-level trust is a less fine-grained metric, representing a recommendation producer's trustworthiness as a whole, without respect to one specific item; for example, one might wish to refer to John's overall trustworthiness based on a series of different past recommendations. This score is simply an average over the item-level trust scores for every item in the user's profile. Essentially, these metrics summarize the relative number of correct recommendations that a given user has made, according to a predefined error bound. The authors propose to modify the way that recommendation partners are selected or weighted during the recommendation process, arguing that profile similarity on its own may not be sufficient and that other factors might also have an important role to play. Specifically, they introduce the notion of trust as the degree to which one might trust a specific profile when it comes to making a specific rating prediction. They develop two trust models, one operating at the level of the profile and one at the level of the items within a profile. In both models, trust is estimated by monitoring the accuracy of a profile at making predictions over an extended period of time; trust is then the percentage of correct predictions that a profile has made in general (profile-level trust) or with respect to a particular item (item-level trust). They describe how this trust information can be incorporated into the recommendation process and demonstrate that it has a positive impact on recommendation quality. However, this system only uses a global trust metric and provides neither personalization nor trust propagation.

Ziegler and Lausen in (Ziegler & Lausen, 2004) note that, in order to provide meaningful results for recommender-system applications, they expect notions of trust to clearly reflect user similarity. They provide empirical results obtained from one real, operational community and verify the latter hypothesis for the domain of book recommendations. Ziegler and Golbeck in (Ziegler & Golbeck, 2007) experimentally show that there is a significant correlation between the trust expressed by users and their profile similarity based on the recommendations they made in the system. This correlation is further studied in survey-based experiments in (Golbeck, 2006).

Golbeck and Hendler in (Golbeck & Hendler, 2004) describe an e-mail filtering system based on trust ratings. The predicted trust of a user is given by a weighted average of her neighbors' trust ratings. They show that the weighted-average metric can provide better results than other metrics.

Golbeck in (Golbeck, 2005) presents FilmTrust, a website that uses trust in Semantic Web-based social networks to create predictive movie recommendations. She shows how these recommendations are more accurate than other techniques in certain cases and discusses this as a mechanism of Semantic Web interaction. Within the FilmTrust website, trust in social networks is used to personalize the user experience. Trust takes on the role of a recommender system, forming the core of an algorithm to create predictive rating recommendations for movies. The accuracy of the trust-based predicted ratings in this system is significantly better than the accuracy of a simple average of the ratings assigned to a movie, and also better than the recommended ratings from a Pearson-correlation based recommender system.

In (Bedi & Kaur, 2006) a model that incorporates the social recommendation process is proposed. The trustworthy peers of the user become the recommender agents and suggest movies to the user according to the user's tastes. The agents in their system also learn from their experience in dealing with the trustworthy peers and update the degree of trust placed in them. In the proposed system, the authors try to merge the advantages of a mechanical recommender system with the more humane recommendation process to make their recommendations trustworthy and useful for the user.

(Bedi et al., 2007) proposes the design of a recommender system that uses knowledge stored in the form of ontologies. The interactions among the peer agents for generating recommendations are based on the trust network that exists between them. Recommendations about a product given by peer agents are in the form of intuitionistic fuzzy sets, specified using degrees of membership, non-membership, and uncertainty. The presented design uses ontologies, a knowledge representation technique, instead of databases for creating annotated content for the Semantic Web. Seeing the potential and popularity of ontologies among researchers, the authors believe that ontologies will be built and maintained in numerous knowledge domains for the Semantic Web and future applications. In addition to the benefits of ontologies, the presented recommender system uses temporal ontologies that absorb the effect of changes in the ontologies due to the dynamic nature of domains. A case study of a tourism recommender system is chosen to generate recommendations for the selection of destination, travel agents, and flight schedule. A comparison of the generated recommendations with manual recommendations by peers establishes the validity of the presented recommender system.

In (Hwang & Chen, 2007) an improvement over standard collaborative filtering techniques is presented that incorporates trust into the collaborative filtering recommendation process. The authors derive the trust score directly from the user rating data, based on users' prediction accuracy in the past, and exploit trust propagation in the trust web. They investigate the effects of both a local trust metric and a global trust metric in standard collaborative filtering recommendation. The global metric is shown to have an advantage over other approaches in prediction coverage, while the local metrics provide more accurate recommendations than the standard collaborative filtering technique. The overall performance of their trust-based recommender system is presented and compares favorably to other approaches. Experimental results verify that incorporating trust into the collaborative filtering process can indeed improve prediction accuracy while maintaining satisfactory prediction coverage.

(Kitisin & Neuman, 2006) propose an approach that includes social factors, e.g. a user's past behavior and reputation, as an element of trust that can be incorporated into the current recommender-system framework, and present experiments to test their solution. Two computational models are presented: an expertise model and a credit model. The expertise model includes trust, reputation, and past behavior, whereas the credit model represents the recommendation incentive. They propose using credits to help alleviate the cold-start problem and data sparseness.

In (Fu-guo & Sheng-hua, 2007) the authors argue that items belonging to different topics need different trustworthy users to make recommendations, so topic-level trust is more effective than profile-level trust when incorporated into the recommendation process. Based on this idea, they design a topic-level trust model which helps a user quantify the degree of trustworthiness on a specific topic, and propose a new recommender algorithm that incorporates the new model into the mechanics of a standard collaborative filtering recommender system. The proposed algorithm combines topic trust with profile similarity. Results from experiments on the MovieLens dataset show that the new method can improve the recommendation accuracy of recommender systems.

(Peng & Seng-cho, 2009) is motivated by the need to provide recommendations about blog articles, so that bloggers/readers can find desired articles easily. Accordingly, this study proposes to exploit the trust relationships between bloggers and readers via explicit trust ratings to generate recommendations in a reliable and satisfactory way. Furthermore, rather than only using a single trust rating, this work presents a multi-faceted model that considers trust by dividing a general trust rating into multiple trust ratings for different types of blog articles, thus enabling trust relationships to be evaluated in a fine-grained manner. To help ease information overload in the blogosphere, this work proposes a trust-enhanced collaborative filtering approach that integrates multi-faceted trust based on article type and user similarity. An online blog article recommender system, called iTrustU, is also designed to evaluate the effectiveness of the proposed approach in terms of accuracy and quality of recommendations. Results demonstrate that the proposed integrated approach yields a significantly higher accuracy than traditional approaches, especially for cold-start users. In addition, analysis results indicate that trust and similarity among bloggers/readers have a significantly positive correlation in the blogosphere. Effective recommender systems can be achieved by exploiting trust relationships in a trust network. The proposed approach is applicable not only to the blogosphere, but also to online social communities when trust relationships already exist between users.

(Victor, De Cock, Cornelis & Teredesai, 2008) examines the problem of cold-start users in recommender systems and proposes to connect the newcomer to an underlying trust network among the users of the recommender system, which alleviates the so-called cold-start problem. The authors study the effect of guiding the new user through the connection process and, in particular, the influence this has on the number of generated recommendations. Experiments on a dataset from Epinions.com support the claim that it is more beneficial for a newcomer to connect to an identified key figure than to a random user.

In (Victor, Cornelis, De Cock & Pinheiro da Silva, 2008) the authors advocate the use of a trust model in which trust scores are (trust, distrust) couples, drawn from a bilattice that preserves valuable trust provenance information, including gradual trust, distrust, ignorance, and inconsistency. They pay particular attention to deriving trust information through a trusted third party, which becomes especially challenging when distrust is also involved.

In our work we provide an alternative approach to dealing with the sparsity problem. We measure similarity based on users' trust relationships, i.e. the trust graph structure and trust values (in contrast to other approaches which use user-item ratings or profile similarity), and propose novel formulas to convert it into subjective logic opinions. Considering these similarities makes extra information accessible for trust inference.


3. TILLIT

In our model, we measure similarity based on the existing web of trust in a community using an iterative fixed-point algorithm on node-pair graphs introduced later in this section. As a method to describe the values of trust as well as its propagation we apply the TNA-SL model (Jøsang et al., 2006) which is based on the Subjective Logic (Jøsang, 2001). Our approach, however, would also work with other methods like (Abdul-Rahman & Hailes, 2000; Grandison & Sloman, 2002).

In subsection 3.1, we briefly explain the TNA-SL model as the background of our work. Our proposed model for trust inference is described in subsection 3.2.

3.1 Trust Network Analysis with Subjective Logic

TNA-SL (Jøsang et al., 2006) is a model for trust network analysis. It uses Subjective Logic (Jøsang, 2001), a specific belief calculus, in which trust is expressed by a belief metric called an opinion. An opinion is denoted by $\omega_B^A = (b, d, u, a)$, expressing the belief of a relying party A in the trustworthiness of another party B. The parameters b and d represent the belief and disbelief in B's trustworthiness, respectively, while u expresses A's uncertainty about whether to trust B or not. The three parameters are all probability values between 0 and 1 and fulfill the constraint b + d + u = 1. The parameter a is called the base rate, and determines how uncertainty contributes to the opinion's probability expectation value, which is calculated as $E(\omega_x^A) = b + a \cdot u$. The opinion space can be mapped into the interior of an equal-sided triangle, where the three parameters b, d, and u determine the position of the point representing the opinion. Fig. 1 illustrates an example where the opinion is $\omega_x = (0.7, 0.1, 0.2, 0.5)$.

Fig. 1.

Opinion triangle with an example opinion (Jøsang, 2001).
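
As an illustrative sketch (not part of the original chapter's implementation), the following Python snippet encodes an opinion as a (b, d, u, a) quadruple, checks the constraint b + d + u = 1, and computes the probability expectation value for the example opinion of Fig. 1:

```python
from dataclasses import dataclass

@dataclass
class Opinion:
    """Subjective Logic opinion (b, d, u, a): belief, disbelief, uncertainty, base rate."""
    b: float
    d: float
    u: float
    a: float = 0.5

    def __post_init__(self):
        # The belief mass must sum to one: b + d + u = 1.
        assert abs(self.b + self.d + self.u - 1.0) < 1e-9, "b + d + u must equal 1"

    def expectation(self) -> float:
        # Probability expectation value E = b + a * u.
        return self.b + self.a * self.u

# The example opinion from Fig. 1: (b, d, u, a) = (0.7, 0.1, 0.2, 0.5).
omega_x = Opinion(0.7, 0.1, 0.2, 0.5)
print(omega_x.expectation())  # 0.8
```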

Based on TNA-SL, there are two different types of trust relations: functional trust (FT) and referral trust (RT). The former concerns A's direct trust in B performing a specific task; the latter concerns A's trust in B giving a recommendation about someone else doing a task, in other words, trust in B's ability to refer to a third party. As mentioned in the introduction, the simplest form of trust inference is trust transitivity, which is widely discussed in the literature (Ding et al., 2005; Guha et al., 2004; Morselli et al., 2007; Quercia et al., 2007; Yang et al., 2002): if A trusts B who trusts C, then A will also trust C. A valid transitive trust path requires that the last edge in the path represents functional trust and that all other edges in the path represent referral trust. Referral trust transitivity and the parallel combination of trust paths are expressed as part of the TNA-SL model (figure 2) (Jøsang et al., 2006).

Fig. 2.

Referral trust transitivity and parallel combination of trust paths.
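
The path-validity rule above can be stated compactly in code. The following minimal sketch (our own illustration) labels each edge of a path as referral ('RT') or functional ('FT') trust and checks that only the final edge is functional:

```python
def is_valid_trust_path(edge_types):
    """Check whether a chain of trust edges forms a valid transitive path:
    every edge except the last must be referral trust ('RT'), and the last
    edge must be functional trust ('FT')."""
    return (len(edge_types) >= 1
            and all(t == "RT" for t in edge_types[:-1])
            and edge_types[-1] == "FT")

print(is_valid_trust_path(["RT", "RT", "FT"]))  # True:  e.g. A -> D -> C -> B in figure 2
print(is_valid_trust_path(["RT", "FT", "RT"]))  # False: functional trust in the middle of the path
```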

The discounting operator (⊗) (Jøsang, 2002) is used to derive trust from transitive trust paths, while the consensus operator (⊕) is used to combine parallel transitive trust paths. The trust network in figure 2 can then be expressed as

$$FT_B^A = \bigl( (RT_D^A \otimes RT_C^D) \oplus (RT_E^A \otimes RT_C^E) \bigr) \otimes FT_B^C$$
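
The sketch below shows the two operators following their standard Subjective Logic definitions (Jøsang, 2001; 2002) and evaluates the network expression above; the consensus is simplified by assuming both operands share the same base rate, and the concrete trust values for the network of figure 2 are purely hypothetical:

```python
def discount(w_ab, w_bx):
    """Discounting operator (⊗): derive A's opinion about x via B.
    w_ab = (b, d, u, a) is A's referral trust in B; w_bx is B's opinion about x."""
    b1, d1, u1, _ = w_ab
    b2, d2, u2, a2 = w_bx
    return (b1 * b2, b1 * d2, d1 + u1 + b1 * u2, a2)

def consensus(w1, w2):
    """Consensus operator (⊕): fuse two independent opinions about the same target.
    For brevity this sketch assumes both opinions share the same base rate."""
    b1, d1, u1, a1 = w1
    b2, d2, u2, a2 = w2
    k = u1 + u2 - u1 * u2
    if k == 0:  # two dogmatic opinions (u = 0); a limit case not handled in this sketch
        raise ValueError("consensus undefined for two dogmatic opinions here")
    return ((b1 * u2 + b2 * u1) / k, (d1 * u2 + d2 * u1) / k, (u1 * u2) / k, a1)

# Hypothetical example values for the network of figure 2 (A -> D -> C, A -> E -> C, C -> B).
RT_A_D = (0.9, 0.0, 0.1, 0.5)   # A's referral trust in D
RT_D_C = (0.8, 0.1, 0.1, 0.5)   # D's referral trust in C
RT_A_E = (0.7, 0.1, 0.2, 0.5)   # A's referral trust in E
RT_E_C = (0.9, 0.0, 0.1, 0.5)   # E's referral trust in C
FT_C_B = (0.8, 0.1, 0.1, 0.5)   # C's functional trust in B

# FT_B^A = ((RT_D^A ⊗ RT_C^D) ⊕ (RT_E^A ⊗ RT_C^E)) ⊗ FT_B^C
FT_A_B = discount(consensus(discount(RT_A_D, RT_D_C),
                            discount(RT_A_E, RT_E_C)),
                  FT_C_B)
print(FT_A_B)
```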

While we consider TNA-SL and Subjective Logic a suitable foundation for our similarity model, the model can, as already mentioned, be adapted to any trust management approach that allows referral and functional trust to be combined (e.g., (Abdul-Rahman & Hailes, 2000; Grandison & Sloman, 2002)).

3.2 The Proposed Model

Our model for estimating how much trust A can place in B considers not only direct experience and recommendations but also similarities between agents with respect to trusting other agents or being trusted by other parties. These two kinds of similarities, between trusters and between trustees, can be expressed gradually by triples very similar to the first three operands of the opinion quadruples, so that we can use the operators of Subjective Logic for the trust value computation.

3.2.1 The Main Idea

If A has functional trust in C who is similar to B (they are similar trustees), then A can infer its functional trust to B ((Ding et al., 2005), see figure 3(a)).

Fig. 3.

(a) Similar trustees (b) Similar trusters.

Our similarity opinion is a special form of referral trust. It reflects that the akin trust evaluations of B and C by several other trusters are a kind of recommendation by these agents to A to treat B and C similarly. Thus, we see the discounting operator as the correct mechanism to combine the similarity opinion between B and C with the functional trust of A in C in order to infer the functional trust of A in B:

$$FT1_B^A = S_B^C \otimes FT_C^A \tag{1}$$

The higher the similarity between B and C, the closer A's trust in B will be to A's trust in C. The lower this similarity, the more uncertain A will be about whether to trust B or not.

If C has functional trust to B and A is similar to C (they are similar trusters), then A can also infer functional trust towards B ((Ding et al., 2005), see figure 3(b)).

The functional trust $FT_B^C$ from C to B is discounted by this similarity opinion to form the new trust value:

$$FT2_B^A = S_C^A \otimes FT_B^C \tag{2}$$

Two trustees are similar if they are both trusted in a similar way by other agents Z1, Z2, …, Zn (figure 4(a)). This is an extension of TNA-SL, which by itself cannot infer any trust value from A towards B in such a trust network.

Fig. 4.

(a) Similarly trusting (b) Similarly trusted.

We call C and A similar trusters if they have alike trust in several other agents Z1, Z2, …, Zn. In this case, if C has functional trust in a new agent B, then A can infer functional trust in B (figure 4(b)). Again, using TNA-SL alone, there is no way to infer a new trust value.

Similarly to Jøsang's way of defining opinions, we use triples to describe similarity, which enables us to consider uncertainty. In particular, the degree of similarity depends on the number n of agents Z1, Z2, …, Zn used for the computation, reflecting that we are more certain about the similarity of two parties if they are trusted in an akin way by a significantly large number of other agents.

The similarity opinion $S_B^C$ from C towards B is the triple2 (similarity, non-similarity, uncertainty). If C = B, the similarity opinion is defined to be (1, 0, 0). Otherwise, it is calculated based on the measure $sim_{te}(C, B)$ of similarity between the two trustees C and B, which is introduced in subsection 3.2.2:

$$S_B^C = \left( \frac{n \cdot sim_{te}(C,B)}{c+n},\; \frac{n \cdot (1 - sim_{te}(C,B))}{c+n},\; \frac{c}{c+n} \right) \tag{3}$$

Here c is a constant determining how fast uncertainty is replaced by assurance. The higher its value, the more agents are needed to reduce the uncertainty value in favor of the similarity and non-similarity values. The similarity opinion fulfills the constraint that the sum of all three values equals 1.

Like (3), the similarity opinion $S_C^A$ from A to C is calculated using the measure of similarity $sim_{tr}(C, A)$ between trusters, which is also introduced in subsection 3.2.2:

$$S_C^A = \left( \frac{n \cdot sim_{tr}(C,A)}{c+n},\; \frac{n \cdot (1 - sim_{tr}(C,A))}{c+n},\; \frac{c}{c+n} \right) \tag{4}$$
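
The following sketch illustrates formulas (3) and (4) and their use in formulas (1) and (2). It is an illustration only: the value c = 2, the similarity scores, the counts n, and the functional trust values are hypothetical inputs, and base rates are omitted for brevity:

```python
def similarity_opinion(sim, n, c=2.0):
    """Build a similarity opinion (similarity, non-similarity, uncertainty) from a
    similarity score sim in [0, 1] and the number n of agents it is based on,
    following formulas (3)/(4); c controls how fast uncertainty is replaced."""
    return (n * sim / (c + n), n * (1.0 - sim) / (c + n), c / (c + n))

def discount(w1, w2):
    """Subjective Logic discounting (⊗) on (b, d, u) triples; base rates omitted."""
    b1, d1, u1 = w1
    b2, d2, u2 = w2
    return (b1 * b2, b1 * d2, d1 + u1 + b1 * u2)

# Formula (1): A's functional trust in B from the similarity of trustees B and C.
# Hypothetical values: sim_te(C, B) = 0.85 based on n = 5 agents, plus A's trust in C.
S_C_B = similarity_opinion(0.85, n=5)
FT_A_C = (0.7, 0.1, 0.2)
FT1_A_B = discount(S_C_B, FT_A_C)

# Formula (2): analogous, using the similarity of trusters A and C and C's trust in B.
S_A_C = similarity_opinion(0.9, n=4)
FT_C_B = (0.8, 0.1, 0.1)
FT2_A_B = discount(S_A_C, FT_C_B)
print(FT1_A_B, FT2_A_B)
```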

3.2.2 Similarity Calculation

In order to measure similarities, we model trusters, trustees, and trust relationships as a graph with nodes representing trusters and trustees and edges representing trust relations. The intuition behind our algorithm is that similar trustees are related to similar trusters. More precisely, trusters A and B are similar if they are related to trustees C and D, respectively, and C and D are themselves similar. The base case is that each node is similar to itself. If we call this graph G, then we can form a node-pair graph G2 in which each node represents an ordered pair of nodes of G, as depicted in figure 5. A node (A, B) of G2 points to a node (C, D) if, in G, A points to C and B points to D. Similarity scores are symmetric, so for clarity we draw (A, B) and (B, A) as a single node A, B (with the union of their associated edges) (Jeh & Widom, 2002).

Fig. 5.

Similarity measurement.
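
The node-pair graph construction can be sketched directly from this description. The snippet below (an illustration with a hypothetical toy graph, not the evaluated implementation) builds G2 from the edge set of G:

```python
import itertools

def node_pair_graph(edges):
    """Build the node-pair graph G2 from the trust graph G.
    `edges` is a set of (truster, trustee) pairs; a G2 node (A, B) points to
    (C, D) whenever A -> C and B -> D are edges of G."""
    nodes = {n for e in edges for n in e}
    g2 = {}
    for a, b in itertools.product(nodes, nodes):
        g2[(a, b)] = [(c, d) for (x, c) in edges if x == a
                             for (y, d) in edges if y == b]
    return g2

# Tiny hypothetical trust graph: Alice and Bob both trust Claire and David.
g2 = node_pair_graph({("Alice", "Claire"), ("Alice", "David"),
                      ("Bob", "Claire"), ("Bob", "David")})
print(g2[("Alice", "Bob")])  # trustee pairs reachable from the truster pair (Alice, Bob)
```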

We propose an iterative fixed-point algorithm on G2 to compute similarity scores3 for node-pairs in G2. The similarity score for a node υ of G2 gives a measure of similarity between the two nodes of G represented by υ. Scores can be thought of as flowing from a node to its neighbors. Each iteration propagates scores one step forward along the direction of the edges, until the system stabilizes (i.e., the scores converge). Since nodes of G2 represent pairs in G, similarity is propagated from pair to pair. Under this computation, two trustees are similar if they are trusted by similar trusters.

For each iteration k, iterative similarity functions $sim_{te,k}(*, *)$ for trustees and $sim_{tr,k}(*, *)$ for trusters are introduced. The iterative computation is started with $sim_{*,0}(*, *)$ defined as

$$sim_{*,0}(A,B) = \begin{cases} 1, & \text{if } A = B \\ 0, & \text{if } A \neq B \end{cases} \tag{5}$$

On the (k + 1)-th iteration, $sim_{*,k+1}(*, *)$ is defined in the special cases as

$$\begin{aligned} sim_{*,k+1}(A,B) &= 1, && \text{if } A = B \\ sim_{te,k+1}(A,B) &= 0, && \text{if } I(A) = \emptyset \text{ or } I(B) = \emptyset \\ sim_{tr,k+1}(A,B) &= 0, && \text{if } O(A) = \emptyset \text{ or } O(B) = \emptyset \end{aligned} \tag{6}$$

I(A) is the set of in-neighbors of A, while O(A) is the set of A's out-neighbors. Individual in-neighbors are denoted as $I_i(A)$, for $1 \le i \le |I(A)|$, and individual out-neighbors are denoted as $O_i(A)$, for $1 \le i \le |O(A)|$. In the general case, $sim_{te,k+1}(*, *)$ is computed from $sim_{tr,k}(*, *)$ as follows:

$$sim_{te,k+1}(A,B) = \frac{\sum_{i=1}^{n}\sum_{j=1}^{n} sim_{tr,k}(I_i(A), I_j(B)) \cdot \bigl(1 - distance(I_i(A), I_j(B), A, B)\bigr)}{\sum_{i=1}^{n}\sum_{j=1}^{n} sim_{tr,k}(I_i(A), I_j(B))} \tag{7}$$

and $sim_{tr,k+1}(*, *)$ is computed from $sim_{te,k}(*, *)$ in the general case as:

$$sim_{tr,k+1}(A,B) = \frac{\sum_{i=1}^{n}\sum_{j=1}^{n} sim_{te,k}(O_i(A), O_j(B)) \cdot \bigl(1 - distance(A, B, O_i(A), O_j(B))\bigr)}{\sum_{i=1}^{n}\sum_{j=1}^{n} sim_{te,k}(O_i(A), O_j(B))} \tag{8}$$

Formulas (7) and (8) are alternately computed in iterations until the resulting similarity values $sim_{tr}$ and $sim_{te}$ converge.
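
A compact sketch of this alternating fixed-point computation is shown below. It is illustrative only: it runs a fixed number of iterations instead of an explicit convergence test, and it takes the opinion-distance function of formula (9) (see the sketch after figure 6) as a parameter:

```python
import itertools

def similarity_iteration(trust, nodes, distance, iters=10):
    """Alternating computation of sim_te (trustee similarity) and sim_tr (truster
    similarity) following formulas (5)-(8). `trust` maps (truster, trustee) pairs
    to trust opinions; `distance` is the opinion distance of formula (9),
    called as distance(truster1, truster2, trustee1, trustee2)."""
    def in_n(x):   # trusters of x
        return [a for (a, b) in trust if b == x]
    def out_n(x):  # trustees of x
        return [b for (a, b) in trust if a == x]

    # Formula (5): sim(A, B) = 1 if A == B, else 0.
    sim_te = {(a, b): 1.0 if a == b else 0.0 for a in nodes for b in nodes}
    sim_tr = dict(sim_te)

    for _ in range(iters):
        new_te, new_tr = {}, {}
        for a, b in itertools.product(nodes, nodes):
            if a == b:
                new_te[a, b] = new_tr[a, b] = 1.0
                continue
            # Formula (7): trustees are similar if their trusters are similar
            # and trust them in a similar way.
            ia, ib = in_n(a), in_n(b)
            if not ia or not ib:
                new_te[a, b] = 0.0
            else:
                num = sum(sim_tr[x, y] * (1.0 - distance(x, y, a, b))
                          for x in ia for y in ib)
                den = sum(sim_tr[x, y] for x in ia for y in ib)
                new_te[a, b] = num / den if den else 0.0
            # Formula (8): trusters are similar if their trustees are similar
            # and are trusted by them in a similar way.
            oa, ob = out_n(a), out_n(b)
            if not oa or not ob:
                new_tr[a, b] = 0.0
            else:
                num = sum(sim_te[x, y] * (1.0 - distance(a, b, x, y))
                          for x in oa for y in ob)
                den = sum(sim_te[x, y] for x in oa for y in ob)
                new_tr[a, b] = num / den if den else 0.0
        sim_te, sim_tr = new_te, new_tr
    return sim_te, sim_tr
```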

The distance function is used to compare trust relations. distance(A, B, C, D) expresses the difference between the trust from A and B towards C and D. It averages the Euclidean distances between the trust values of A in C and of B in D, respectively, on the opinion triangle (see figure 1):

$$\begin{aligned}
distance(A,A,C,D) &= \sqrt{\bigl(b_{AC} + \tfrac{1}{2}u_{AC} - b_{AD} - \tfrac{1}{2}u_{AD}\bigr)^2 + \tfrac{3}{4}\bigl(u_{AC} - u_{AD}\bigr)^2} \\
distance(A,B,C,C) &= \sqrt{\bigl(b_{AC} + \tfrac{1}{2}u_{AC} - b_{BC} - \tfrac{1}{2}u_{BC}\bigr)^2 + \tfrac{3}{4}\bigl(u_{AC} - u_{BC}\bigr)^2} \\
distance(A,B,C,D) &= \tfrac{1}{2}\Bigl( \sqrt{\bigl(b_{AC} + \tfrac{1}{2}u_{AC} - b_{BD} - \tfrac{1}{2}u_{BD}\bigr)^2 + \tfrac{3}{4}\bigl(u_{AC} - u_{BD}\bigr)^2} \\
&\qquad + \sqrt{\bigl(b_{AD} + \tfrac{1}{2}u_{AD} - b_{BC} - \tfrac{1}{2}u_{BC}\bigr)^2 + \tfrac{3}{4}\bigl(u_{AD} - u_{BC}\bigr)^2} \Bigr)
\end{aligned} \tag{9}$$

For the sake of simplicity, all base rate values ($a_{AD}$, $a_{AC}$, $a_{BD}$, $a_{BC}$) are assumed to be 1/2. The factor 3/4 is used for the vertical axis to adapt the measures; otherwise, the opinion triangle would be compressed and the distance between the points (0,1,0) and (0,0,1) would not be equal to one. Figure 6 illustrates the distance function graphically.

Fig. 6.

The distance between opinions.
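
The following sketch implements formula (9) under the stated assumptions (base rates of 1/2, vertical scaling by the factor 3/4 under the square root). To plug it into the iteration sketch above, one would bind the trust dictionary, e.g. `lambda a, b, c, d: distance(a, b, c, d, trust)`:

```python
from math import sqrt

def opinion_distance(w1, w2):
    """Euclidean distance between two opinions (b, d, u) on the opinion triangle,
    assuming a base rate of 1/2; the vertical axis is scaled so that the
    distance between (0, 1, 0) and (0, 0, 1) equals one."""
    b1, _, u1 = w1
    b2, _, u2 = w2
    return sqrt((b1 + 0.5 * u1 - b2 - 0.5 * u2) ** 2 + 0.75 * (u1 - u2) ** 2)

def distance(a, b, c, d, trust):
    """Formula (9): difference between the trust from trusters a, b towards
    trustees c, d. `trust` maps (truster, trustee) pairs to opinions; the
    general case assumes all four required opinions are available."""
    if a == b:                      # one truster, two trustees
        return opinion_distance(trust[a, c], trust[a, d])
    if c == d:                      # two trusters, one trustee
        return opinion_distance(trust[a, c], trust[b, c])
    # general case: average of the two cross-pair distances
    return 0.5 * (opinion_distance(trust[a, c], trust[b, d]) +
                  opinion_distance(trust[a, d], trust[b, c]))
```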


4. Conclusion and Future Work

In order to overcome the sparseness of the web of trust, we consider users' similarity as a factor from which to derive trust connectivity and trust values. The main idea is that we consider two persons similar if either a fair number of others have akin trust in them or if they themselves trust several other people alike. In the first case, every person who has trust in one of them can infer similar trust in the other one, at least as an estimated starting value. In the second case, a person may infer the trust value of a third party from other trusters similar to her. The results of our evaluation (Tavakolifard et al., 2009) lead to the expectation that TILLIT will increase the coverage of trust relationships significantly, and that the accuracy of the predicted additional relations will be fairly high as well.

We consider a similarity-based recommendation system for singers and songs a good application example for our model. Normally, in systems like iTunes, only the most popular songs, or other songs by artists from whom one has already bought songs, are advertised, without any guarantee that one will like these songs as well. Using our approach, it is possible to find other customers who have a taste in music akin to that of the customer Alice reading the advertisements. Songs rated positively by these customers but not yet bought by Alice can be advertised to her, since she will probably like them as well. This will make Alice more receptive to the advertisements.

We aggregate all ratings that a buyer has given to songs of a particular singer to compute that user's trust value towards the singer, using the metric mentioned before. A bipartite trust graph from users to singers can be formed based on these trust values (Figure 7). Then, formulas (7) and (8) can be used to calculate the similarity of singers and of users, respectively.

Fig. 7.

Application Scenario
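
One plausible reading of "the metric mentioned before" is the evidence-to-opinion mapping of (Jøsang & Knapskog, 1998) referenced in footnote 2. The sketch below makes that assumption explicit; treating ratings above a threshold as positive evidence is our own illustrative choice, not a detail fixed by the chapter:

```python
def ratings_to_opinion(ratings, threshold=3):
    """Map a user's ratings for one singer's songs to a trust opinion (b, d, u)
    using the evidence-to-opinion mapping b = r/(r+s+2), d = s/(r+s+2), u = 2/(r+s+2)
    attributed here to (Jøsang & Knapskog, 1998). The rating threshold separating
    positive (r) from negative (s) experiences is a hypothetical choice."""
    r = sum(1 for x in ratings if x >= threshold)   # positive experiences
    s = len(ratings) - r                            # negative experiences
    return (r / (r + s + 2), s / (r + s + 2), 2 / (r + s + 2))

# Hypothetical data: Alice rated four songs of one singer on a 1-5 scale.
edge_alice_singer = ratings_to_opinion([5, 4, 2, 5])
print(edge_alice_singer)  # an edge opinion in the bipartite user-singer trust graph
```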

In the future, we aim to evaluate the accuracy of a whole recommender system that employs our proposed model. Furthermore, we will assess the possibility of modeling other trust propagation methods with our approach. An example is transposition, or reciprocity (Guha et al., 2004), which assumes that A's trust in B causes B to also develop some level of trust towards A. Another propagation method is coupling, in which A's trust in C propagates to B because C and B trust people in common (Guha et al., 2004). This propagation rule is depicted in figure 8. According to this rule, we can use the similarity between trusters to propagate the trust in one trustee to another.

Fig. 8.

Coupling: a trust propagation method.

Moreover, one can use similarity in a completely different way. Trust is very specific, and nobody trusting Bob as a good car mechanic will automatically trust him to undertake heart surgery. But he will probably be capable of repairing motorcycles. Thus, there is a large similarity between the domains of repairing cars and motorcycles, but a very low one between both of these and medical surgery. We intend to use trust relations in one domain to infer ones in similar domains, and consider ontologies describing the degrees of similarity between the domains a useful means to this end. All in all, we are convinced that the various forms of similarity are good vehicles to tackle the major problem of too-sparse webs of trust in online communities.

References

  1. Abdul-Rahman, A. & Hailes, S. (2000). Supporting trust in virtual communities, Proceedings of the 33rd Hawaii International Conference, Volume 6, IEEE Computer Society Press, Maui, Hawaii.
  2. Avesani, P., Massa, P. & Tiella, R. (2004). Moleskiing: trust-aware decentralized recommender system, 1st Workshop on Friend of a Friend, Social Networking and the Semantic Web, Galway, Ireland.
  3. Avesani, P., Massa, P. & Tiella, R. (2005). A trust-enhanced recommender system application: Moleskiing, Proceedings of the 2005 ACM Symposium on Applied Computing, ACM, New York, NY, USA, pp. 1589–1593.
  4. Bedi, P. & Kaur, H. (2006). Trust based personalized recommender system, INFOCOM Journal of Computer Science 5(1): 19–26.
  5. Bedi, P., Kaur, H. & Marwaha, S. (2007). Trust based recommender system for the semantic web, Proc. of IJCAI'07, pp. 2677–2682.
  6. Ding, L., Kolari, P., Ganjugunte, S., Finin, T. & Joshi, A. (2005). Modeling and Evaluating Trust Network Inference, Technical report, Maryland Univ. Baltimore, Dept. of Computer Science and Electrical Engineering.
  7. Fu-guo, Z. & Sheng-hua, X. (2007). Topic-level Trust in Recommender Systems, Management Science and Engineering, 2007. ICMSE 2007. International Conference on, pp. 156–161.
  8. Gal-Oz, N., Gudes, E. & Hendler, D. (2008). A Robust and Knot-Aware Trust-Based Reputation Model, Proceedings of IFIPTM 2008 - Joint iTrust and PST Conferences on Privacy, Trust Management and Security, Springer, pp. 167–182.
  9. Golbeck, J. (2005). Semantic Web Interaction through Trust Network Recommender Systems, Proc. of the ISWC05 Workshop on End User Semantic Web Interaction.
  10. Golbeck, J. (2006). Trust and nuanced profile similarity in online social networks, Journal of Artificial Intelligence Research.
  11. Golbeck, J. & Hendler, J. (2004). Reputation network analysis for email filtering, Proceedings of the First Conference on Email and Anti-Spam, Vol. 44, pp. 54–58.
  12. Grandison, T. & Sloman, M. (2002). Specifying and analysing trust for internet applications, Proceedings of the 2nd IFIP Conference on E-Commerce, E-Business & E-Government (I3E), Kluwer Academic Publishers, Lisbon, pp. 145–157.
  13. Guha, R., Kumar, R., Raghavan, P. & Tomkins, A. (2004). Propagation of trust and distrust, Proceedings of the 13th International Conference on World Wide Web, ACM, New York, NY, USA, pp. 403–412.
  14. Hwang, C. & Chen, Y. (2007). Using Trust in Collaborative Filtering Recommendation, Lecture Notes in Computer Science 4570: 1052.
  15. Jeh, G. & Widom, J. (2002). SimRank: a measure of structural-context similarity, Proceedings of the Eighth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, ACM, New York, NY, USA, pp. 538–543.
  16. Jøsang, A. (2001). A Logic for Uncertain Probabilities, International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 9(3): 279–311.
  17. Jøsang, A. (2002). The consensus operator for combining beliefs, Artificial Intelligence 141(1-2): 157–170.
  18. Jøsang, A., Hayward, R. & Pope, S. (2006). Trust network analysis with subjective logic, Proceedings of the 29th Australasian Computer Science Conference, Volume 48, Australian Computer Society, pp. 85–94.
  19. Jøsang, A. & Knapskog, S. J. (1998). A metric for trusted systems, Proceedings of the 21st National Security Conference, NSA.
  20. Kim, Y., Le, M., Lauw, H., Lim, E., Liu, H. & Srivastava, J. (2008). Building a web of trust without explicit trust ratings, Data Engineering Workshop, 2008. ICDEW 2008. IEEE 24th International Conference on, pp. 531–536.
  21. Kitisin, S. & Neuman, C. (2006). Reputation-based Trust-Aware Recommender System, Securecomm and Workshops, 2006, pp. 1–7.
  22. Lathia, N., Hailes, S. & Capra, L. (2008). Trust-Based Collaborative Filtering, Proceedings of IFIPTM 2008 - Joint iTrust and PST Conferences on Privacy, Trust Management and Security, Springer, pp. 119–134.
  23. Massa, P. & Avesani, P. (2004). Trust-Aware Collaborative Filtering for Recommender Systems, Lecture Notes in Computer Science, pp. 492–508.
  24. Massa, P. & Avesani, P. (2006). Trust-aware Bootstrapping of Recommender Systems, ECAI, Riva del Garda, Italy. Also available at: http://www.gnuband.org/papers/trustaware_bootstrapping_of_recommender_systems/ [Last visit: 12.11.2007].
  25. Massa, P. & Bhattacharjee, B. (2004). Using Trust in Recommender Systems: An Experimental Analysis, Trust Management: Second International Conference, iTrust 2004, Oxford, UK, March 29-April 1, 2004, Proceedings, Springer.
  26. Morselli, R., Bhattacharjee, B., Katz, J. & Marsh, M. (2007). Exploiting approximate transitivity of trust, Broadband Communications, Networks and Systems, 2007. BROADNETS 2007. Fourth International Conference on, pp. 515–524.
  27. O'Donovan, J. & Smyth, B. (2005). Trust in recommender systems, Proceedings of the 10th International Conference on Intelligent User Interfaces, ACM, New York, NY, USA, pp. 167–174.
  28. Papagelis, M., Plexousakis, D. & Kutsuras, T. (2005). Alleviating the sparsity problem of collaborative filtering using trust inferences, Proceedings of iTrust, Springer, pp. 224–239.
  29. Peng, T. & Seng-cho, T. (2009). iTrustU: a blog recommender system based on multi-faceted trust and collaborative filtering, Proceedings of the 2009 ACM Symposium on Applied Computing, ACM, New York, NY, USA, pp. 1278–1285.
  30. Quercia, D., Hailes, S. & Capra, L. (2007). Lightweight Distributed Trust Propagation, Data Mining, 2007. ICDM 2007. Seventh IEEE International Conference on, pp. 282–291.
  31. Tavakolifard, M., Herrmann, P. & Knapskog, S. (2009). Inferring trust based on similarity with TILLIT, Proceedings of the Third IFIP WG 11.11 International Conference on Trust Management (IFIPTM 2009), Springer, West Lafayette, USA.
  32. Tavakolifard, M., Knapskog, S. & Herrmann, P. (2008a). Cross-Situation Trust Reasoning, Proceedings of the Workshop on Web Personalization, Reputation and Recommender Systems (WPRRS'08), IEEE Computer Society Press.
  33. Tavakolifard, M., Knapskog, S. & Herrmann, P. (2008b). Trust Transferability Among Similar Contexts, Proceedings of the 4th ACM International Workshop on QoS and Security for Wireless and Mobile Networks (Q2SWinet 2008), ACM.
  34. Victor, P., Cornelis, C., De Cock, M. & Pinheiro da Silva, P. (2008a). Gradual trust and distrust in recommender systems, Fuzzy Sets and Systems.
  35. Victor, P., De Cock, M., Cornelis, C. & Teredesai, A. (2008b). Getting Cold Start Users Connected in a Recommender System's Trust Network, Computational Intelligence in Decision and Control 1, pp. 877–882.
  36. Weng, J., Miao, C. & Goh, A. (2006). Improving collaborative filtering with trust-based metrics, Proceedings of the 2006 ACM Symposium on Applied Computing, ACM, New York, NY, USA, pp. 1860–1864.
  37. Yang, Y., Canberra, A., Brown, L., Wales, S., ADFA, A., Lewis, E. & Melbourne, V. (2002). W3 Trust Model: Evaluating Trust and Transitivity of Trust of Online Services, International Conference on Internet Computing, pp. 354–362.
  38. Ziegler, C. & Golbeck, J. (2007). Investigating interactions of trust and interest similarity, Decision Support Systems 43(2): 460–475.
  39. Ziegler, C. & Lausen, G. (2004). Analyzing correlation between trust and user similarity in online communities, Lecture Notes in Computer Science, pp. 251–265.

Notes

  1. "Tillit" is the Norwegian word for trust.
  2. This metric is inferred from the trust value metric of Jøsang and Knapskog (Jøsang & Knapskog, 1998).
  3. An alternative approach to measuring this similarity is to model an agent's mental structure as an ontology and use the methods proposed in our previous work (Tavakolifard et al., 2008a; 2008b).
