Open access peer-reviewed chapter

Social Trust: Evaluating Node Influential Capability in Social Networks

Written By

Yap Hock Yeow and Lim Tong‐Ming

Submitted: 03 October 2016 Reviewed: 22 November 2016 Published: 19 July 2017

DOI: 10.5772/67021

From the Edited Volume

Recent Progress in Parallel and Distributed Computing

Edited by Wen-Jyi Hwang


Abstract

Social networking sites have become platforms that facilitate large-scale information sharing activities in recent years. Many organizations analyze social networking traffic to gain market insights and observe the latest market trends. These analyses also allow organizations to identify key promoters who have strong influence on these social networking platforms to promote their products or services. It is hypothesized that social trust plays an important role in influential propagation and is able to increase the rate of success in influencing other social nodes in a social network. This research performs a large-scale experimental simulation to study the influential outcomes with and without the presence of social trust in the social nodes.

Keywords

  • influence diffusion
  • influential maximization
  • social trust
  • trusted influence
  • domain specified trust influence

1. Introduction

In the last decade, the number of social networking site users has increased by leaps and bounds [1]. Social networking sites are great places for one to express opinions about people, products or services. Social networking sites disseminate information by influencing current and new nodes within the social networking environment. Gesenhues [2], Paquette [3] and Quesenberry [4] found that recommendations on social networking sites are often highly regarded by consumers. A study carried out by Ewing [5] showed that consumers often rely extensively on social networking site referrals to make purchasing decisions. In a world with many uncertainties, interacting with anonymous users often raises trust issues. Trust presents various concerns, especially for business operators, because information spreading on social networking sites may have a reputational impact on a business. Many trust-related studies [6–9] have been conducted by different researchers, and most of them strongly support the view that trust plays a key role in affecting one's decisions. Without doubt, the use of social networking sites for large-scale information sharing and message spreading is effective [10–12], but there are still shortcomings that need to be addressed, including the assessment of the credibility of online user-generated content. This research suggests two approaches: investigating trust as a factor in influential maximization, and investigating trust with domain specified social nodes as a factor in influential maximization. The objective of this research is to uncover the trust value of each social node by evaluating the node's opinion consistency, and then to evaluate the rate of successfully influenced social nodes with and without the presence of trust and domain specified trust. This research formulates two hypotheses: (1) trust is able to positively increase the rate of successfully influenced social nodes within a social networking site, and (2) trust is able to positively increase the rate of successfully influenced social nodes when trusted social networking site users are in the same domain. This research also reviews trust and trust-related implementation issues on social networking sites extensively. This article reports the results gathered from the findings and presents a discussion of the two hypotheses, followed by a conclusion.


2. Related works

Constant engagement on social networking sites significantly raises the chance of exposing one's identity, either voluntarily or involuntarily [13]. Excessive disclosure of one's personal information raises privacy-related issues and threats. In order to minimize the risk of overexposing their identities, most if not all users on social networking sites [14] mask themselves using an Internet Identity (IID). They use the IID to disguise themselves while communicating with others on these social networking sites. Uncovering the true identity of an online user one interacts with may not be easy, as people constantly adopt new strategies to restrict the amount of information they disclose on these sites. Social networking sites play an important role in spreading information, news and ideas to all connected nodes. They create an endless source of information that is readily available to their users, but it is also undeniable that not all information obtained from social networking sites is accurate and reliable. The fact that many social networking site readers and consumers rely extensively on the information obtained from these sites to make their decisions worries many business operators. Information spreading around social networking sites can change the public's viewpoint toward a product or service. Since most user-generated content on social networking sites is text, it is important for business operators to extract and analyze this content to identify key promoters and detractors and, at the same time, to identify genuine and trustworthy content. Figure 1 illustrates how an opinion flows from different sources and how the information is perceived by the recipients.

Figure 1.

Opinion propagation model.

Given the vast amount of information easily obtainable online, trusting a piece of information has always been a concern. In order to trust any content, it is important to understand what trust is. Trust is the most fundamental motivation behind different people cooperating toward a common goal. There is no absolute definition of how trust is initiated, because the definition of trust varies across applications and is always situation specific. Different researchers and research areas may also have their own definitions of trust, such as:

  • Trust is defined as willingness to rely on the other partner of the relationship [15].

  • Trust is the expectation or belief that no opportunistic behavior will be exhibited by others [16].

  • Trust in one party's performance depends on the actions of the counterpart [17].

  • Trust increases when expectations of the other party are consistently and reliably met, and decreases when the other party acts otherwise [18].

  • Trust is a mental mechanism a person uses to reduce uncertainty and complexity when reaching an agreement [19].

  • Trust is the subjective probability by which an individual, A, expects that another individual, B, performs a given action on which its welfare depends [20].

    • Challenged by Falcone and Castelfranchi: having high reliability in a person in general is not necessarily sufficient to conclude that such a person is very dependable [21].

It is also said that the value of trust changes when it is applied differently. The Trust Referent Characteristics table developed by McKnight [22] best describes the trust-related characteristics that a node may display in an online environment. In this research, the Trust Referent Characteristics table has been improved with additional identified values obtained by analyzing the datasets from Refs. [23–25]. Table 1 shows the updated Trust Referent Characteristics table.

Trust-related characteristic | Second-order conceptual category
Competent, Expert, Experience, Dynamic | Competence
Predictable, Foresight | Predictability
Good moral, Good will, Benevolent, Caring, Responsive | Benevolence
Honest, Credible, Reliable, Dependable | Integrity
Openness, Careful | Psychology, mentality
Shared understanding, Contemplation | Knowledgeable
Personally attractive | Prospect

Table 1.

Trust referent characteristic table.

This research examines the integrity of user-generated content (the Integrity characteristics in Table 1) as a form of trust on social networking sites. Since social networking sites contain large amounts of unstructured text, content integrity can be analyzed by adopting text analysis algorithms from many researchers [26–29]. Algorithmic details and the latest research application updates can be found in Refs. [30, 31] and are therefore not covered in this article. The diffusion equations used in this research are formulated using Kempe's [32, 33] activation function. Eq. (1) illustrates Kempe's function:

P_v(u, S) = \frac{f_v(S \cup \{u\}) - f_v(S)}{1 - f_v(S)}   (E1)
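
For illustration only (this is not the authors' implementation), Eq. (1) translates directly into code once the set function f_v of a node v is available; the function name and the toy coverage-style f_v below are assumptions of this sketch.

def activation_probability(f_v, S, u):
    # Kempe's activation function, Eq. (1): the marginal gain of adding node u
    # to the already-active set S, normalized by the remaining headroom of f_v.
    before = f_v(frozenset(S))
    after = f_v(frozenset(S) | {u})
    if before >= 1.0:  # node v is already certain to be activated
        return 1.0
    return (after - before) / (1.0 - before)

# Toy example: a hypothetical f_v counting how many of v's neighbours are active.
neighbours_of_v = {"a", "b", "c", "d"}
f_v = lambda S: len(S & neighbours_of_v) / len(neighbours_of_v)
print(activation_probability(f_v, {"a"}, "b"))  # (0.5 - 0.25) / (1 - 0.25) ≈ 0.333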

3. Research methodology

The process of profiling trust involves two stages: the first stage uses the Text-Oriented Opinion Mining [30] algorithm to analyze the user-generated text content of each social node and return a trust probability score between a minimum value of 0 and a maximum value of 1, and the second stage analyzes the cumulative objective score of each social node's text content. The simulation uses Matlab R2016a, with the genetic algorithm diffusion model (GADM) as the base algorithm that performs the influential diffusion. The Virtual Social Node (VSN) algorithm plays a simple yet important role in the influential diffusion process by simulating a virtual social network consisting of nodes and relationship links between nodes. The virtual social network simulated by VSN is structured as a ∪ b or c ∩ (a ∪ b) (Figure 2), such that A is the highest superset of all nodes in the social network and a, b, c, …, n are subsets of A, denoted {a, b, c, …, n} ⊂ A, consisting of a total of 5,117,944 social nodes. The influence diffusion adopts a bottom-up approach where it initiates from the lowest subset all the way to the highest superset. These algorithms have been published in Refs. [30, 31, 34] and therefore will not be discussed in detail in this article.

Figure 2.

Social network relationship diagram.
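
As a rough sketch only (the published VSN algorithm is described in Refs. [31, 34]), the nested-subset structure described above could be generated along the following lines; the node count, subset sizes and helper name are assumptions made purely for illustration.

import random

def build_virtual_network(total_nodes=10_000, n_subsets=4, seed=42):
    # A is the highest superset of node ids; subsets a, b, c, ... are drawn from A,
    # and random node pairs are linked to mimic relationship links between nodes.
    rng = random.Random(seed)
    A = list(range(total_nodes))
    subsets = [set(rng.sample(A, total_nodes // n_subsets)) for _ in range(n_subsets)]
    links = {(rng.choice(A), rng.choice(A)) for _ in range(total_nodes * 2)}
    return A, subsets, links

A, subsets, links = build_virtual_network()
# Influence diffusion then starts from the lowest subset and works its way
# up toward the superset A along the relationship links (bottom-up).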

GADM operates in a way that an influence is diffused to any social node given the existence of a physical link between the source node and the recipient node. Any influence diffused by GADM is considered successful if the influence is propagated to and acknowledged by the recipient social node. A number of enhancements are carried out on GADM. The first enhanced algorithm is the trust-enhanced genetic algorithm diffusion model (T-GADM), which includes trust values calculated from the uncovering of trusted social nodes in the influential diffusion and calculation process. In addition, domain values are included in the enhanced T-GADM, resulting in the domain specified trust-enhanced genetic algorithm diffusion model (DST-GADM). On top of the diffusion of influence from trusted social nodes, DST-GADM takes influence diffusion further by targeting recipient social nodes that share similar interests (or domains) with the source social node. The results generated by these algorithms are presented as probability values between a minimum of 0 and a maximum of 1, with an accuracy of three decimal places. Details of the algorithm design are discussed in Section 4.
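
Before turning to the detailed designs in Section 4, the core diffusion step shared by these models can be pictured roughly as below; the acceptance_probability callback and the random draw are assumptions of this sketch, with T-GADM and DST-GADM differing only in how that probability is computed.

import random

def diffuse_once(links, active, acceptance_probability, rng=random.Random(0)):
    # One diffusion pass: every active source tries to influence each recipient it
    # is physically linked to; the attempt is considered successful (acknowledged)
    # with the model-specific acceptance probability.
    newly_active = set()
    for source, recipient in links:
        if source in active and recipient not in active:
            if rng.random() < acceptance_probability(source, recipient):
                newly_active.add(recipient)
    return active | newly_active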


4. Algorithm design, development and implementation

This section discusses the design, development and implementation of the algorithms in this research. Two algorithms are discussed: the trust-enhanced genetic algorithm diffusion model (T-GADM) and the domain specified trust-enhanced genetic algorithm diffusion model (DST-GADM). These algorithms are discussed in their respective subsections. It is also acknowledged that the operation of T-GADM and DST-GADM relies on two additional sub-modules, namely the enhanced text analysis (E-TA) with the Trusted Social Node (TSN) algorithm and the Virtual Social Node (VSN) algorithm. These algorithms have been published in Refs. [30, 31, 34] and therefore will not be discussed in this article.

4.1. Trust-enhanced genetic algorithm diffusion model

The trust-enhanced genetic algorithm diffusion model (T-GADM) is an enhanced version of the genetic algorithm diffusion model (GADM) in which the source social node's trust value is taken into consideration when calculating the recipient social node's influence acceptance. The likelihood of a social node accepting an influence from a trusted source node is calculated by applying a certainty factor that measures belief or disbelief over the conditional probability Pr(Ai|T) illustrated in Eq. (2).

\Pr(A_i \mid T) = \frac{\Pr(T \mid C_i)\,\Pr(A)}{\Pr(T \mid C_i)\,\Pr(A) + \Pr(T \mid \neg C_i)\,\Pr(\neg A)}   (E2)

Where:

  • Pr(Ai|T): Chance of having an influence accepted by the recipient (Ai) given a trusted source (T). This is what the algorithm wants to calculate.

  • Pr(T|Ci): Chance of having a trusted social node (T) given that it shows significant interactive consistency (Ci).

  • Pr(A): Chance of having a recipient node to accept an influence.

  • Pr(¬A): Chance of having a recipient node to not accept an influence.

  • Pr(T|¬Ci): Chance of having a trusted social node (T) given that it does not show significant interactive consistency.

The result generated from Eq. (2) is then used to evaluate the certainty that the influence has been successfully ingested. This is achieved by applying Eq. (3) as the measure of belief and Eq. (4) as the measure of disbelief, then finally calculating the certainty value by applying Eq. (5).

\mathrm{MB}(H_{accept} \mid E_{trust}) = \frac{\max[\Pr(A \mid T), \Pr(\alpha)] - \Pr(\alpha)}{1 - \Pr(\alpha)}   (E3)
\mathrm{MD}(H_{accept} \mid E_{trust}) = \frac{\min[\Pr(A \mid T), \Pr(\alpha)] - \Pr(\alpha)}{0 - \Pr(\alpha)}   (E4)
\mathrm{CF} = \frac{\mathrm{MB}(H_{accept} \mid E_{trust}) - \mathrm{MD}(H_{accept} \mid E_{trust})}{1 - \min[\mathrm{MB}(H_{accept} \mid E_{trust}),\ \mathrm{MD}(H_{accept} \mid E_{trust})]}   (E5)

Eqs. (2)–(5), as implemented in the pseudocode below, calculate the following parameters:

  • PrAT: The probability of acceptance given the trust variance of a source node.

  • MB_HaEt: The strength (measure) of belief that the node will accept the influence.

  • MD_HaEt: The strength (measure) of disbelief that the node will accept the influence.

  • cf: Certainty factor grouping of the current evaluating node.

  • success: Describes whether the node has been successfully influenced. This criterion applies only if a threshold value is set prior to the evaluation. There is no fixed specification of what threshold value should be used; the threshold value is chosen based on the assessor's preferred influential range.

The following pseudocode illustrates the implementation of the T‐GADM influential diffusion algorithm.

START

//Eqn ver 1 w/trust (Bayesian Prop)

FUNCTION probability_v1(St,Ra)

Set PrAT = ((Ra * St) + St)/2;

return round(PrAT, 4);

END FUNCTION

//Eqn ver 2 w/trust (Enhanced Bayesian)

FUNCTION probability_v2(St,Ra)

Set notRa = 1 ‐ Ra;

Set notSt = 1 ‐ St;

Set PrAT = ((Ra * St)/((Ra * St) + (notRa * notSt))) *

(1 ‐ 0.13);

return round(PrAT, 4);

END FUNCTION

FUNCTION belief(constant,PrAT,St,Ra)

Set MB_HaEt = ((max(PrAT,constant)) ‐ constant)/(1 ‐ constant);

return round(MB_HaEt, 4);

END FUNCTION

FUNCTION disbelief(constant,PrAT,St,Ra)

Set MD_HaEt = ((min(PrAT, constant)) ‐ constant)/(0 ‐ constant);

return round(MD_HaEt, 4);

END FUNCTION

FUNCTION cf(MB_HaEt,MD_HaEt)

Set cf = (MB_HaEt ‐ MD_HaEt)/(1 ‐ (min(MB_HaEt, MD_HaEt)));

return round(cf, 4);

END FUNCTION

//Check Threshold

FUNCTION checkThresh(value,threshold)

IF value >= threshold

return ‘accepted';

ELSE

return ‘rejected';

END IF

END FUNCTION

END
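
As a hedged illustration, the pseudocode above translates almost directly into Python; the trust value St, acceptance prior Ra, prior constant Pr(α) and threshold used below are made-up sample numbers, and the 0.13 correction factor is simply carried over from the pseudocode.

def probability_v2(St, Ra):
    # Enhanced Bayesian acceptance probability, as in the pseudocode for Eq. (2).
    not_Ra, not_St = 1 - Ra, 1 - St
    PrAT = (Ra * St) / ((Ra * St) + (not_Ra * not_St)) * (1 - 0.13)
    return round(PrAT, 4)

def belief(constant, PrAT):
    # Measure of belief MB(H_accept | E_trust), Eq. (3).
    return round((max(PrAT, constant) - constant) / (1 - constant), 4)

def disbelief(constant, PrAT):
    # Measure of disbelief MD(H_accept | E_trust), Eq. (4).
    return round((min(PrAT, constant) - constant) / (0 - constant), 4)

def certainty_factor(MB_HaEt, MD_HaEt):
    # Certainty factor CF, Eq. (5).
    return round((MB_HaEt - MD_HaEt) / (1 - min(MB_HaEt, MD_HaEt)), 4)

# Worked example with assumed values: St = 0.8, Ra = 0.6, Pr(alpha) = 0.5, threshold = 0.
PrAT = probability_v2(0.8, 0.6)   # 0.7457
MB = belief(0.5, PrAT)            # 0.4914
MD = disbelief(0.5, PrAT)         # 0.0, since PrAT exceeds the prior
cf = certainty_factor(MB, MD)     # 0.4914
result = 'accepted' if cf >= 0 else 'rejected'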

4.2. Domain specified trust-enhanced genetic algorithm diffusion model

This research also investigates the application of domain-related trusted social nodes in order to examine whether this further increases the rate of successfully influenced social nodes on a social network platform. Domain in this research represents the interest groups a social node is involved in or participates in. Domain specified analysis is an extended module of T-GADM. In the domain specified trust influence diffusion algorithm, an additional step that uncovers the domains of each social node from the dataset is needed. All the harvested domain relationship links are illustrated as a conceptual diagram in Figure 3.

Figure 3.

Conceptual diagram of harvested domains and domain links for domain specified trust influential maximization.

The relationship between two domains is calculated by applying concept mapping [35, 36] and a weighing technique [37], where domain relationships are classified into three tiers:

  • Domain major (node): A form of specification or excellence to which a variable formally commits or from which it inherits. Any domain can be a domain major, but only one domain can be the domain major at any given time.

  • Level 2 domain (solid line): Consists of entities that directly correlate with the domain major (part of, an extension of, or a subset of it) but do not serve as the domain major at the point of analysis.

  • Domain minor (dashed line): Consists of entities that mostly do not directly correlate with the domain major or the level 2 domains, but that may portray traits and characteristics relating to the domain major.

Each harvested domain, its domain relationship links and its domain relationship weights are then indexed using a Domain ID (a domain identity represented by a value), as seen in Table 2.

Domain major Level 2 domain Domain minor
ID Weight ID Weight ID Weight
[01] 1.0 [16] 0.7821 [06] 0.3237
[02] 1.0 [10] 0.6762
[03] 1.0 [12] 0.2997
[04] 1.0 [08] 0.8967 [12] 0.5521
[05] 1.0 [14] 0.6887 [16] 0.2194
[06] 1.0 [09] 0.4571 [01] 0.2857
[16] 0.3321 [24] 0.1756
[07] 1.0 [10] 0.3123 [24] 0.2752
[15] 0.5231
[08] 1.0 [04] 0.6774 [12] 0.5882
[09] 1.0 [06] 0.7987 [15] 0.1725
[10] 0.2141
[16] 0.2365
[10] 1.0 [02] 0.2379 [09] 0.1423
[07] 0.2656
[11] 1.0 [13] 0.4597 [12] 0.2178
[12] 1.0 [11] 0.2287
[04] 0.3212
[08] 0.3101
[13] 1.0 [11] 0.3427 [16] 0.1563
[14] 1.0 [05] 0.5638
[15] 1.0 [07] 0.8469 [09] 0.2112
[23] 0.1211
[16] 1.0 [01] 0.5485 [09] 0.1653
[06] 0.2324 [05] 0.1536
[17] 1.0 [09] 0.4439 [06] 0.2846
[18] 1.0
[19] 1.0 [22] 0.4575 [21] 0.2133
[20] 1.0 [19] 0.1556
[21] 1.0 [22] 0.3354 [19] 0.1757
[22] 1.0 [19] 0.4121 [10] 0.2675
[02] 0.1294
[23] 1.0 [24] 0.3216 [06] 0.1128
[15] 0.2144
[24] 1.0 [15] 0.4174 [23] 0.1127
[07] 0.2152 [10] 0.1216

Table 2.

Weighted domain specified referent table.
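
For illustration, the weighted referent structure of Table 2 might be held in memory as the kind of nested mapping that the DST-GADM pseudocode below traverses (a domainsCache with major, level2 and minor payloads); the exact storage format is an assumption, but the weights shown are taken from the [06] row of Table 2.

# Sketch of one domainsCache entry, for domain major [06].
domains_cache = {
    "06": {
        "major": "06",
        "payload": {
            "majorWeight": 1.0,
            "level2": [{"domainID": "09", "weight": 0.4571},
                       {"domainID": "16", "weight": 0.3321}],
            "minor":  [{"domainID": "01", "weight": 0.2857},
                       {"domainID": "24", "weight": 0.1756}],
        },
    },
    # ... one entry per domain major in Table 2
}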

Domain specified trust influence uses Eq. (6) for social influence acceptance-related calculations.

Dw\Pr(A_i \mid T) = \frac{\Pr(A_i \mid T) + \Pr(A_i \mid T)\,\big(D_{mjr} + \sum \cap L_2 + \sum \cap D_{mnr}\big)}{1 + \Pr(A_i \mid T)\,\big(D_{mjr} + \sum \cap L_2 + \sum \cap D_{mnr}\big)}   (E6)

Where

  • DwPr(Ai|T): Chance of having an influence accepted by the recipient (Ai) given a trusted source (T) with the presence of certain domain entities (Dw).

  • Pr(Ai|T): Chance of having an influence accepted by the recipient (Ai) given a trusted source (T). This is what the algorithm previously calculated.

  • Dmjr: Represents the weight inherited by the domain major. In this research, since the domain major represents an identical relationship to the domain inherited by the node under analysis, the concept weight between the node under analysis and the domain major is 1.0.

  • ∑⋂L2: Represents the summation of the weights of all intersected level 2 domains with the domain inherited by the node under analysis.

  • ∑⋂Dmnr: Represents the summation of the weights of all intersected domain minors with the domain inherited by the node under analysis.

The following pseudocode illustrates the implementation of the DST-GADM influential diffusion algorithm.

START

Import domains as domainsCache

//Eqn ver 3 w/trust and domain (Enhanced Bayesian)

FUNCTION probability_v3(St,Ra,links,levels)

Set mjr = 0;

Set l2 = 0;

Set mnr = 0;

//Explode node links

Set links = explode(‘|', links);

Set major = links.getLastElement();

//Find associative domains

FOREACH domainsCache as key and payload

IF payload.get['major'] == major

Set use = domainsCache.get[key];

END IF

END FOREACH

//Prep traverse levels

Set traverseArrays = [];

IF levels == 1

Set mjr = (float) mjr(use,links);

ELSE IF levels == 2

Set mjr = (float) mjr(use,links);

Set l2 = (float) level2(use,links);

ELSE IF levels == 3

Set mjr = (float) mjr(use,links);

Set l2 = (float) level2(use,links);

Set mnr = (float) mnr(use,links);

END IF

Set notRa = 1 ‐ Ra;

Set notSt = 1 ‐ St;

Set prob = ((Ra * St)/((Ra * St) + (notRa * notSt))) *

(1 ‐ 0.13);

Set PrAT = (prob + (prob * mjr) + (prob * l2) + (prob * mnr))/

(1 + (prob * mjr) + (prob * l2) + (prob * mnr));

return round(PrAT, 4);

END FUNCTION

//Calculate domain weightages (returns calculated values)

FUNCTION mjr(use,links)

IF links.getLastElement() == use.get['major']

Set payloads = use.get['payload'];

return payloads['majorWeight'];

ELSE

return 0;

END IF

END FUNCTION

FUNCTION level2(use,links)

Set payloads = use.get['payload'];

Set payloads = payloads.get['level2'];

Set cumulativeWeight = 0;

FOREACH links as link

FOREACH payloads as payload

IF payload.get['domainID'] == link

Set cumulativeWeight += payload.get['weight'];

END IF

END FOREACH

END FOREACH

return cumulativeWeight;

END FUNCTION

FUNCTION mnr(use,links)

Set payloads = use.get['payload'];

Set payloads = payloads.get['minor'];

Set cumulativeWeight = 0;

FOREACH links as link

FOREACH payloads as payload

IF payload.get['domainID'] == link

Set cumulativeWeight += payload.get['weight'];

END IF

END FOREACH

END FOREACH

return cumulativeWeight;

END FUNCTION
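
To make Eq. (6) concrete, here is a hedged worked example: suppose Pr(Ai|T) = 0.7457 (the value from the earlier T-GADM illustration) and the node under analysis intersects domain major [06] together with its level 2 domain [09] and domain minor [01] from Table 2.

def probability_v3_weights(PrAT, mjr, l2, mnr):
    # Domain-weighted acceptance probability, Eq. (6).
    boost = (PrAT * mjr) + (PrAT * l2) + (PrAT * mnr)
    return round((PrAT + boost) / (1 + boost), 4)

# Assumed inputs: PrAT from the earlier example; weights from the [06] row of Table 2.
PrAT = 0.7457
DwPrAT = probability_v3_weights(PrAT, mjr=1.0, l2=0.4571, mnr=0.2857)
# boost = 0.7457 * (1.0 + 0.4571 + 0.2857) ≈ 1.2996
# DwPrAT ≈ (0.7457 + 1.2996) / (1 + 1.2996) ≈ 0.8894, higher than PrAT alone.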

4.3. Acceptance threshold

The threshold value represents the level of certainty both the source node and the recipient node must adhere to. Increasing the threshold value means increasing the required quality of influencing messages and the level of trust propagated from the source social node, hence potentially reducing the number of successfully influenced social nodes. Since there is no common specification of what threshold value should be used, and the value often depends on the researcher's preference, the threshold value used in this research is set to neutral, or zero (0).


5. Results and discussion

In this article, results generated from the proposed genetic algorithm diffusion model (GADM), the trust-enhanced genetic algorithm diffusion model (T-GADM) and the domain specified trust-enhanced genetic algorithm diffusion model (DST-GADM) are compared and discussed respectively. Figure 4 shows the difference between results generated from the base algorithm and the trust-enhanced algorithm with respect to the rate of successfully influenced social nodes within the simulated social networking environment, whereas Figure 5 illustrates the successful influence acceptance rates with the threshold value set to its default (0).

Figure 4.

Acceptance probability for GADM vs. T‐GADM.

Figure 5.

Acceptance statistics for GADM vs. T‐GADM.

All percentages presented in this section are calculated by averaging the differences in the rate of successfully influenced social nodes between two or more algorithms, with the GADM algorithm serving as the benchmark. The analysis showed that, comparing the results generated by GADM (base algorithm without trusted social nodes) and T-GADM (enhanced algorithm with trusted social nodes), T-GADM yields a 5.79% increase in the rate of successfully influenced social nodes over GADM. This increase occurs because trustworthy social nodes have a higher tendency of being accepted by other social nodes; therefore, influence spread by these trustworthy social nodes is more strongly accepted. Furthering the analysis, the results show that DST-GADM (domain specified trust influence) yields a further 4% improvement in the rate of successfully influenced social nodes compared to T-GADM, and about a 10% improvement compared to GADM. Such improvement is expected, since trusted social nodes that share similar interests are better at influencing social nodes within the same interest group. The analysis results are illustrated in Figures 6 and 7. Both results support the hypotheses presented in this article: social trust plays an important role in influential propagation and is able to increase the rate of success in influencing other social nodes in a social network.

Figure 6.

Acceptance probability for GADM vs. T‐GADM vs. DST‐GADM Tier 1, 2 and 3.

Figure 7.

Acceptance rates for GADM vs. T-GADM vs. DST-GADM Tier 1, 2 and 3.


6. Conclusion

The outcome of this research has shown considerable increases in the rate of successfully influenced social nodes, between 5.58% and 5.89%, with the presence of social trust within social nodes. The results have also shown that the rate of successfully influenced social nodes can be further improved with the introduction of domain specified trust, with an additional increase of between 0.02% and 4.31%. The results also suggest that the rate of successfully influenced social nodes may vary when different levels of domain relationship links are introduced, such as the presence of domain majors, level 2 domains and domain minors, although some social nodes may not be affected by the additional levels of domain relationship links. It was also found that some social nodes' acceptance probability dropped when comparing the base algorithm and the trust-enabled algorithm. This outcome is aligned with our initial expectation, because the drop in certain social nodes' acceptance probability can be caused by source nodes with lower trust values. On the whole, the dataset used in this research showed an average increase of 9% in the rate of successfully influenced social nodes with the application of trusted social nodes and domain specified trust, including domains from all three tiers of domain relationship links classified in this research. Although the incremental percentages are modest, considering the large size of the dataset and the fact that the characteristics of different datasets may influence performance, the marginal improvement found is worth taking into consideration in this context. Finally, this research also foresees two possible future research directions that could further enhance the social node influence strength and the acceptance results obtained from this research: distance-weighted trust and trust value down the lane.

References

  1. I. T. Union, “Internet Users (Per 100 People) 1981–2014,” The World Bank, December 2014. [Online]. Available: http://data.worldbank.org/indicator/IT.NET.USER.P2. [Accessed 2 July 2015].
  2. A. Gesenhues, “Survey: 90% Of Customers Say Buying Decisions Are Influenced By Online Reviews,” Marketing Land, United States, 2013.
  3. H. Paquette, “Social Media as a Marketing Tool: A Literature Review,” University of Rhode Island, United States, 2013.
  4. K. Quesenberry, “Social Media Is Too Important to Be Left to the Marketing Department,” Harvard Business Publishing, United States, 2016.
  5. M. Ewing, “71% More Likely to Purchase Based on Social Media Referrals,” HubSpot, United States, 2011.
  6. A. Josang, R. Ismail and C. Boyd, “A survey of trust and reputation systems for online service provision,” in Decision Support Systems, Amsterdam, 2007.
  7. J. Caverlee, L. Liu and S. Webb, “The SocialTrust framework for trusted social information management: Architecture and algorithms,” Information Sciences, vol. 180, no. 1, pp. 95–112, 2010.
  8. E. Hargittai, L. Fullerton, E. Menchen-Trevino and K. Thomas, “Trust online: Young adults’ evaluation of web content,” International Journal of Communication, vol. 4, no. 1, pp. 468–494, 2010.
  9. P. Resnick and R. Zeckhauser, “Trust among strangers in internet transactions: Empirical analysis of eBay’s reputation system,” Advances in Applied Microeconomics, vol. 11, no. 2, pp. 127–157, 2002.
  10. S. Hendrickson, “Case Study: How Content Diffuses Through Different Social Networks,” Social Media Today, United States, 2013.
  11. H. Leonard, “The Fascinating Spread of Content Through Social Networks,” Business Insider, United Kingdom, 2013.
  12. J. Tierney, “Social Media Helps Police, Cities Spread Information,” Daily Herald, Utah, 2013.
  13. B. Marcus, F. Machilek and A. Schutz, “Personality in cyberspace: Personal Web sites as media for personality expressions and impressions,” Journal of Personality and Social Psychology, vol. 90, no. 6, pp. 1014–1031, 2006.
  14. A. Renninger and W. Shumar, Building Virtual Communities: Learning and Change in Cyberspace, United Kingdom: Cambridge University Press, 2002.
  15. R. Morgan and S. Hunt, “The commitment-trust theory of relationship marketing,” Journal of Marketing, vol. 58, no. 1, pp. 20–38, 1994.
  16. D. Gefen, E. Karahanna and D. Straub, “Trust and TAM in online shopping: An integrated model,” MIS Quarterly, vol. 27, no. 1, pp. 51–90, 2003.
  17. E. Yildirim, “The effects of user comments on e-trust: An application on consumer electronics,” Journal of Economics, Business and Management, vol. 1, no. 4, pp. 360–364, 2013.
  18. J. Weisberg, D. Te’eni and L. Arman, “Past purchase and intention to purchase in e-commerce: The mediation of social presence and trust,” Internet Research, vol. 21, no. 1, pp. 82–96, 2011.
  19. M. Gustavsson and A.-M. Johansson, “Consumer Trust in E-commerce,” Kristianstad University, Sweden, 2006.
  20. D. Gambetta, “Can We Trust Trust?,” Basil Blackwell, Oxford, 1988.
  21. R. Falcone and C. Castelfranchi, “Social Trust: A Cognitive Approach,” Kluwer, Netherlands, 2001.
  22. H. McKnight, “What trust means in e-commerce customer relationships: An interdisciplinary conceptual typology,” International Journal of Electronic Commerce, United States, 2002.
  23. J. McAuley, R. Pandey and J. Leskovec, “Inferring networks of substitutable and complementary products,” in SIGKDD, Sydney, 2015.
  24. J. McAuley, C. Targett, Q. Shi and A. van den Hengel, “Image-based recommendations on styles and substitutes,” in SIGIR, Santiago, Chile, 2015.
  25. M. Richardson, R. Agrawal and P. Domingos, “Trust management for the semantic web,” in International Semantic Web Conference, Florida, 2003.
  26. M. Q. Hu and B. Liu, “Mining and summarizing customer reviews,” in ACM International Conference on Knowledge Discovery and Data Mining, New York, 2004.
  27. S.-M. Kim and E. Hovy, “Determining the sentiment of opinions,” in International Conference on Computational Linguistics, Stroudsburg, 2004.
  28. B. Pang and L. Lee, “Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales,” in Annual Meeting of the Association for Computational Linguistics, Stroudsburg, 2005.
  29. T. Wilson and J. Wiebe, “Recognizing strong and weak opinion,” Computational Intelligence, vol. 22, no. 2, pp. 73–99, 2006.
  30. Y. Hock-Yeow and L. Tong-Ming, “An evaluation and enhancement of the sentiment oriented opinion mining technique using opinion scoring,” in International Conference on Innovation and Sustainability 2015, Chiang Mai, 2015.
  31. Y. Hock-Yeow and L. Tong-Ming, “An analysis of opinion variation of social text in the trusted social node identification,” in American Scientific Publishers Advanced Science Letters, Sabah, Malaysia, 2016.
  32. D. Kempe, J. Kleinberg and E. Tardos, “Influential nodes in a diffusion model for social networks,” in Proceedings of the 32nd International Conference on Automata, Languages and Programming, Berlin, 2005.
  33. D. Kempe, J. Kleinberg and E. Tardos, “Maximizing the spread of influence through a social network,” in Proceedings of the Ninth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, New York, 2003.
  34. Y. Hock-Yeow and L. Tong-Ming, “Trusted Social Node: Evaluating the Effect of Trust and Trust Variance to Maximize Social Influence in a Multilevel Social Node Influential Diffusion Model,” in Springer Lecture Notes in Computer Science, Beijing, 2016.
  35. J. Novak, “The Theory Underlying Concept Maps and How to Construct and Use Them,” Institute for Human and Machine Cognition, United States, 2006.
  36. J. Novak, “Learning, creating, and using knowledge: Concept maps as facilitative tools in schools and corporations,” Journal of e-Learning and Knowledge Society, vol. 6, no. 3, pp. 21–30, 2010.
  37. C. Lin and R. Nayak, “A case study of failure mode analysis with text mining methods,” in 2nd International Workshop on Integrating Artificial Intelligence and Data Mining, Australia, 2007.
