Open access peer-reviewed chapter

A Review of Several Privacy Violation Measures for Large Networks under Active Attacks

Written By

Tanima Chatterjee, Nasim Mobasheri and Bhaskar DasGupta

Submitted: 06 August 2019 Reviewed: 19 December 2019 Published: 24 January 2020

DOI: 10.5772/intechopen.90909

From the Edited Volume

Security and Privacy From a Legal, Ethical, and Technical Perspective

Edited by Christos Kalloniatis and Carlos Travieso-Gonzalez


Abstract

It is by now a standard practice to use the concepts and terminologies of network science to analyze social networks of interconnections between people, such as Facebook, Twitter and LinkedIn. The powers and implications of such social network analysis are indeed indisputable; for example, such analysis may uncover previously unknown knowledge on community-based involvements, media usage and individual engagements. However, all these benefits are not necessarily cost-free, since a malicious individual could compromise the privacy of the users of these social networks for harmful purposes, resulting in the disclosure of sensitive data linked to those users. A natural way to avoid this consists of an “anonymization process” of the relevant social network. However, since such anonymization processes may not always succeed, an important research goal is to quantify and measure how much privacy a given social network can achieve. Toward this goal, some recent research works have aimed at evaluating the resistance of a social network against active privacy-violating attacks by introducing and studying a new and meaningful privacy measure for social networks. In this chapter, we review both theoretical and empirical aspects of such privacy violation measures of large networks under active attacks.

Keywords

  • social networks
  • privacy measure
  • active attacks
  • (k, ℓ)-anonymity
  • algorithmic complexity

1. Introduction

In recent years, social networks have become an indisputable part of people’s lives. The emergence of such networks has altered how we interact with the world. A given individual’s day-to-day activities like media consumption, job hunting and social interaction have changed, along with how businesses and other beneficial entities interact with them through marketing, advertising, and information diffusion. This has led to an unstoppable race of collecting information and interactions from social networks by researchers, governments, and business entities for various purposes. From a research point of view, social networks and their interaction mechanisms provide valuable insights into many fields of study, such as sociology, psychology, advertising, and recommendation systems. It is only natural that the information contained in these networks and the value they hold have been and will be targeted by bad actors for malicious activities. The importance of these networks and the value of the information that can be retrieved from them have led social network researchers to take a closer look at methods to combat such bad actors, as well as to formulate network measures that can provide insight into the privacy of these networks. In this survey, we will look at one such measure, known as (k, ℓ)-anonymity [1], and will discuss some theoretical and empirical results regarding this measure.

1.1 Overview of the paper

Given the irrefutable importance of social networks in our daily lives and the ever increasing risk of compromising valuable personal data through privacy attacks against these networks, it is preferable to know how secure a given social network is against privacy attacks. This necessitates a deeper look into the types of privacy attacks and how to cope with them. There is an extensive literature on privacy-preserving computational models in a variety of application areas, such as multi-party communications or distributed computing settings [2, 3, 4, 5, 6]. In this chapter, we focus on a specific type of attack, known as a background-based active attack, and on one measure that reflects the resistance of any given network against such attacks. The organization of the rest of the chapter is as follows:

  • In Section 2 we briefly discuss the notion of privacy in social networks and review some literature on privacy-violating attacks on social networks. We also introduce the (k, ℓ)-anonymity privacy measure and some related network measures that form its basis.

  • In Section 3 we review some basic terminologies and notations that will be used in the formulation of the three problems introduced in Section 4.

  • Section 4 contains three problems that arise from the theoretical investigation of (k, ℓ)-anonymity.

  • Section 5 contains the results of an empirical study on the resistance of real-world social networks.

  • Finally, we end this chapter with some concluding remarks in Section 6.


2. Privacy measures in social networks

We begin by discussing the mathematical structure that best fits the representation of social networks. A social network is often portrayed as a graph [7, 8] G = (V, E), where V is a set of nodes representing the social members, and E is the set of edges portraying the relationships among these members. Both nodes and edges may have extra attributes, such as weights, that provide extra information about the nature of these social bonds (e.g., trust or popularity); however, throughout this survey we will consider the simplest form of graphs, namely undirected and unweighted graphs, to model our social networks.

As we discussed in the previous section, the information that social networks provide is invaluable. Due to the very nature of many social network applications, the identity of the members or the nature of the relationships between members is quite sensitive and valuable. Thus, when releasing a social network we want to remove any attributes that may help identify these kinds of sensitive data. Assuming all members and their relationships are of high sensitivity, preventing identity disclosure or link disclosure becomes an important task. One popular method to prevent such disclosures is anonymization. In an anonymization process, we publish the network without identifying the corresponding nodes or potentially identifiable attributes. Even after anonymizing the network, we will still be releasing many informative attributes encoded by the network structure; for example, attributes such as node degree, connectivity, or other similar graph properties can still help adversaries compromise the privacy of the users of a published network.

Adversaries usually rely on background knowledge to compromise the privacy of published anonymized social networks. To understand why current privacy preservation methods such as anonymization fail, we need a proper model of the adversary's background knowledge. Although it is challenging to have a comprehensive model of all possible types of adversary background knowledge, it is very useful to model the background knowledge via structural properties of networks such as node degrees, embedded subgraphs, node neighbors, etc. [9]. Backstrom et al. [10] were the first to introduce a category of attacks on anonymized social graphs. The models introduced in [10] are background-based attacks and are widely used in the privacy analysis of social networks. The two main types of attacks are as follows.

  1. Passive attacks, in which the adversary does not modify the network by injecting new nodes, but instead uses structural knowledge to detect the location of a known node. In this type of attack, the adversary can benefit from the fact that most nodes in real social networks often belong to a small uniquely identifiable subgraph [10]. An adversary can then build a coalition with members of such subgraphs and attempt to re-identify the subgraphs in the anonymized published network, thus compromising the privacy of neighboring nodes.

  2. Active attacks, in which the adversary chooses an arbitrary set of target users, creates new nodes and inserts them into the social network in such a way that they are connected to the target set and form a distinguishable subgraph. After the anonymized version of the social network is published, the adversary can then use this subgraph as a fingerprint to re-identify the targeted users and compromise their privacy.

The authors in [10] also showed that it is possible to compromise the privacy of any social network of n nodes with high probability using only O(log n) attacker nodes. In a passive attack, the adversary's structural knowledge gives her/him a global view that depends on the global structure of the network. It could pose a high privacy risk if an adversary were to combine this global view with the local structural knowledge obtained using an active attack. As an example, consider the network in Figure 1. If we only have global structural knowledge, it is not possible to differentiate the nodes v_3 and v_4 (e.g., they have the same node degree). However, controlling just one extra node in the graph, such as the node v_1, provides local structural knowledge such as distances between nodes, and using the knowledge of the distances of v_1 from v_3 and v_4 (dist(v_1, v_3) = 1 and dist(v_1, v_4) = 2) one can easily differentiate node v_3 from node v_4.

Figure 1.

A simple graph G used in Section 2 to illustrate the high risk posed by combining knowledge gained by active and passive attacks.

There are several well-studied strategies for coping with active attacks on a social network [9, 11, 12] that address the anonymization process of the social network. However, in this chapter we will focus on a measure that evaluates how resistant a social network is against this type of privacy attack. Introduced by Trujillo-Rasua and Yero [1], (k, ℓ)-anonymity is a novel and, to the best of our knowledge, the only privacy measure examining the structural resistance of a given graph against active attacks. The (k, ℓ)-anonymity measure is based on the metric representation of nodes, where k is a privacy threshold and ℓ is the maximum number of attacker nodes that may be inserted in the network. It was shown in [1] that graphs satisfying (k, ℓ)-anonymity can successfully deter adversaries controlling at most ℓ nodes in the graph from re-identifying nodes with probability higher than 1/k.

2.1 (k, ℓ)-anonymity

The (k, ℓ)-anonymity measure is based on a concept known as the k-metric antidimension of graphs. To facilitate further discussion of the measure, we first introduce some notations and terminologies. For a simple connected graph G = (V, E), where V is the set of nodes and E is the set of edges, let dist(v_i, v_j) denote the distance (i.e., the number of edges in a shortest path) between the nodes v_i and v_j. Given an ordered set of nodes S = {v_1, …, v_t} and a node u, we define the metric representation of u with respect to S as the vector d(u, S) = (dist(u, v_1), …, dist(u, v_t)). Metric representations of nodes are closely related to the concept of a resolving set of a graph. Inspired by the problem of identifying an intruder in a network, and introduced independently by Slater [13] and by Harary and Melter [14], a resolving set of a graph allows one to distinguish every pair of nodes of the graph.

Definition 1 (resolving set). Given a graph G = (V, E), a subset S ⊆ V is called a resolving set for G if, for each pair of nodes u, v ∈ V, there exists a node x ∈ S such that dist(x, u) ≠ dist(x, v). A resolving set of smallest cardinality is called a metric basis, and its cardinality is referred to as the metric dimension of G.

The concepts of metric representation and resolving set inspired the introduction of another network measure, known as a k-antiresolving set, that serves as the foundation for (k, ℓ)-anonymity.

Definition 2 (k-antiresolving set). Given a graph G = (V, E), a set S ⊆ V is called a k-antiresolving set of G if k is the largest integer such that, for every node v ∈ V \ S, there exist at least k − 1 nodes u_1, u_2, …, u_{k−1} ∈ V \ S, different from v, with the same metric representation with respect to S as v.

A k-antiresolving set of minimum cardinality is called a k-antiresolving basis, and its cardinality is the k-metric antidimension of G, denoted by adim_k(G). Note that a k-antiresolving set need not exist for every value of k in a graph.
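
To make these definitions concrete, the following Python sketch computes metric representations via breadth-first search and checks Definitions 1 and 2 for a given vertex subset S. It is an illustrative implementation written for this review, not code from [1] or [16]; it assumes a connected graph encoded as an adjacency dictionary, and the 5-node path at the end is a hypothetical example.

```python
from collections import deque

def bfs_distances(adj, source):
    """Hop distances from `source` in a connected, unweighted graph given as
    an adjacency dictionary {node: set of neighboring nodes}."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def metric_representation(adj, u, S):
    """d(u, S): the vector of distances from u to the nodes of the ordered set S."""
    dist = bfs_distances(adj, u)
    return tuple(dist[s] for s in S)

def is_resolving(adj, S):
    """Definition 1: S is a resolving set iff all nodes of G receive pairwise
    distinct metric representations with respect to S."""
    reps = [metric_representation(adj, u, S) for u in adj]
    return len(set(reps)) == len(reps)

def antiresolving_k(adj, S):
    """Definition 2: the value k for which the non-empty proper subset S is a
    k-antiresolving set, i.e. the size of the smallest group of nodes outside
    S that share a metric representation with respect to S."""
    groups = {}
    for u in adj:
        if u not in S:
            groups.setdefault(metric_representation(adj, u, S), []).append(u)
    return min(len(g) for g in groups.values())

# A small hypothetical example: the path v1 - v2 - v3 - v4 - v5.
path = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3, 5}, 5: {4}}
print(is_resolving(path, [1]))     # True: distances from v1 are 0, 1, 2, 3, 4
print(antiresolving_k(path, {3}))  # 2: {v1, v5} and {v2, v4} share representations
```

Every non-empty proper subset S of a connected graph is a k-antiresolving set for exactly one value of k, namely the smallest group size computed above, which is why antiresolving_k can return a single number.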

The (k, ℓ)-anonymity measure is built upon the concept of a k-antiresolving set. Assume the adversary has gained control of a subset S of nodes of the graph G, where S is a k-antiresolving set of G. Then the adversary cannot uniquely re-identify any node, based on the background knowledge (namely, the knowledge of the metric representation of a node v with respect to S), with probability higher than 1/k. (k, ℓ)-anonymity is formally defined as follows [1].

Definition 3 ((k, ℓ)-anonymity). A graph G under active attack satisfies (k, ℓ)-anonymity if k is the smallest positive integer such that the k-metric antidimension of G is less than or equal to ℓ.

In the above definition, k is a parameter depicting the privacy threshold and ℓ represents the maximum number of attacker nodes. It is safe to assume that the number of attacker nodes ℓ is significantly smaller than the number of nodes present in the network, since injecting attacker nodes or gaining control of existing nodes is difficult to do without being detected [15].
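
Building on Definition 3, a brute-force computation of (k, ℓ)-anonymity simply takes the smallest k-value achieved by any vertex subset of size at most ℓ, since adim_k(G) ≤ ℓ exactly when some such subset is a k-antiresolving set. The sketch below is illustrative only; it reuses the antiresolving_k helper and the hypothetical path graph from the previous sketch, enumerates all subsets, and is therefore exponential in ℓ, unlike the polynomial-time algorithms of [16] discussed in Section 4.

```python
from itertools import combinations

def k_ell_anonymity(adj, ell):
    """Smallest k such that adim_k(G) <= ell, i.e. the smallest value of
    antiresolving_k(adj, S) over all non-empty proper subsets S with |S| <= ell.
    Brute force, exponential in ell; for very small graphs only."""
    nodes = list(adj)
    best = None
    for size in range(1, min(ell, len(nodes) - 1) + 1):
        for S in combinations(nodes, size):
            k = antiresolving_k(adj, set(S))  # helper from the previous sketch
            if best is None or k < best:
                best = k
    return best

# With the hypothetical 5-node path from the previous sketch: an end node by
# itself is already a 1-antiresolving set, so the path satisfies (1, 2)-anonymity.
print(k_ell_anonymity(path, 2))  # 1
```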


3. Basic terminologies and notations

For the exposition in the remainder of this chapter, we will need some notations and terminologies, which we introduce here. Consider the (undirected, unweighted) graph G in Figure 2. We will use this graph to illustrate the terminologies and notations that are introduced.

  • The metric representation of node v_i is denoted by d(v_i) = (dist(v_1, v_i), dist(v_2, v_i), …, dist(v_n, v_i)).

    • For example, in Figure 2, d(v_1) = (0, 1, 2, 3, 3, 2).

  • The diameter of G is the length of the longest shortest path and is denoted by diam(G) = max_{v_i, v_j ∈ V} dist(v_i, v_j).

  • The open neighborhood of a node v_i is the set of all nodes directly connected to v_i and is denoted by Nbr(v_i) = { v_j : {v_i, v_j} ∈ E }.

    • For example, in Figure 2, Nbr(v_2) = {v_1, v_3, v_6}.

  • The metric representation of a node v_i with respect to a subset S ⊆ V is denoted by d(v_i, S).

    • For example, in Figure 2, d(v_1, {v_3, v_4}) = (2, 3).

  • We can expand the previous notation to reflect the metric representation of a subset of nodes V′ ⊆ V \ S with respect to S as D(V′, S) = (d(v, S) : v ∈ V′).

    • For example, in Figure 2, D({v_1, v_2}, {v_3, v_4}) = ((2, 3), (1, 2)). Note that the first pair (2, 3) corresponds to v_1 and the second pair (1, 2) corresponds to v_2.

  • We define a partition 𝒫 = {V_1, V_2, …, V_t} of a set V′ ⊆ V as one with the following properties:

    • ∪_{i=1}^{t} V_i = V′, and

    • for all i ≠ j, V_i ∩ V_j = ∅.

  • We define a refinement 𝒫′ = {V′_1, V′_2, …, V′_{t′}} of a partition 𝒫, denoted by 𝒫′ ≺_r 𝒫, as one that can be obtained from 𝒫 using the following rules:

    • For every node v_j ∈ (∪_{i=1}^{t} V_i) \ (∪_{i=1}^{t′} V′_i), remove v_j from the set in 𝒫 that contains it.

    • Optionally, for every set V″ in 𝒫, replace V″ by a partition of V″.

    • If there exists an empty set, remove it.

    • For example, in Figure 2, {{v_1}, {v_2}, {v_3, v_5}} ≺_r {{v_1, v_2}, {v_3, v_4, v_5}}.

  • We define an equivalence relation (and related notations) over the set of same-length vectors D(V \ V′, V′), for some ∅ ≠ V′ ⊆ V, by declaring two vectors equivalent if and only if they are identical (a small computational sketch follows this list):

    • The set of equivalence classes, which forms a partition of D(V \ V′, V′), is denoted by (V \ V′, V′)_=.

    • For example, in Figure 2, ({v_1, v_2, v_6}, {v_3, v_5})_= consists of the two classes {(2, 3), (2, 3)} and {(1, 2)}, since d(v_1, {v_3, v_5}) = d(v_6, {v_3, v_5}) = (2, 3) while d(v_2, {v_3, v_5}) = (1, 2).

      • We declare two nodes v_i, v_j ∈ V \ V′ to be in the same equivalence class if d(v_i, V′) and d(v_j, V′) belong to the same equivalence class in (V \ V′, V′)_=; thus (V \ V′, V′)_= also defines a partition of V \ V′ into equivalence classes.

      • The measure of the equivalence relation is defined as μ(D(V \ V′, V′)) = min_{Y ∈ (V \ V′, V′)_=} |Y|.

      • If a set S is a k-antiresolving set, then D(V \ S, S) defines a partition into equivalence classes of measure k.

    • For example, in Figure 2, μ(D({v_1, v_2, v_6}, {v_3, v_5})) = 1, and {v_3, v_5} is a 1-antiresolving set.
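
The following sketch expresses the notations D(V′, S), (V \ V′, V′)_= and μ directly in Python. The edge set used below is not taken from Figure 2 itself (which is not reproduced here); it is a hypothetical 6-node graph chosen to be consistent with the worked examples above, so the printed values match the ones quoted in this section.

```python
from collections import deque

def bfs_distances(adj, source):
    """Hop distances from `source` in a connected, unweighted graph."""
    dist, queue = {source: 0}, deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def D(adj, V_prime, S):
    """D(V', S): metric representation of every node of V' with respect to S."""
    return {u: tuple(bfs_distances(adj, u)[s] for s in S) for u in V_prime}

def equivalence_classes(adj, V_prime, S):
    """Partition of V' into groups of nodes sharing a metric representation w.r.t. S."""
    classes = {}
    for u, rep in D(adj, V_prime, S).items():
        classes.setdefault(rep, []).append(u)
    return classes

def mu(adj, V_prime, S):
    """Measure of the equivalence relation: size of the smallest class."""
    return min(len(c) for c in equivalence_classes(adj, V_prime, S).values())

# Hypothetical 6-node graph consistent with the worked examples of this
# section (the actual Figure 2 is not reproduced in this text).
adj = {1: {2}, 2: {1, 3, 6}, 3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}, 6: {2}}
print(D(adj, [1, 2], [3, 4]))                       # {1: (2, 3), 2: (1, 2)}
print(equivalence_classes(adj, [1, 2, 6], [3, 5]))  # {(2, 3): [1, 6], (1, 2): [2]}
print(mu(adj, [1, 2, 6], [3, 5]))                   # 1, so {v3, v5} is 1-antiresolving
```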

Figure 2.

An example used in Section 3 for illustrating various notations.


4. Theoretical results

To understand graph resistance against privacy attacks, one needs to study the (k, ℓ)-anonymity measure in greater detail. Thus, we look into some computational problems related to this measure that were formalized and investigated in [16]. This section contains three problems from [16] and the respective algorithms to solve each problem efficiently. It is important to note that (k, ℓ)-anonymity in its basic definition sets no limitation for the adversary, which means that an adversary can take control of as many nodes as she/he can. However, in the real world there are many mechanisms designed solely to prevent such attacks, and thus the chances of being caught are significantly high. This notion is the motivation behind several problems with respect to measuring the (k, ℓ)-anonymity of a graph [17].

We now state the three problems for analyzing (k, ℓ)-anonymity. Problem 1 simply asks for a k-antiresolving set for the largest possible value of k. Problem 2 sets a restriction on the number of nodes the adversary can control and attempts to find the largest possible value of k while minimizing the number of nodes that are compromised. Problem 3 introduces a version of the problem that attempts to address the trade-off between the privacy threshold and the number of compromised nodes.

Problem 1 (metric antidimension (ADIM)). Find a k-antiresolving subset of nodes S that maximizes k.

Problem 1 assumes there is no limitation on the number of attacker nodes, thus finding an absolute bound on privacy violation. Note that a solution to Problem 1, with value denoted by k_opt, shows that, given no bound on the number of nodes an adversary can control, it is feasible to uniquely re-identify nodes with probability 1/k_opt. The assumptions in Problem 1 are rarely plausible in practice; due to the mechanisms present to counter such attacks, the more nodes the adversary controls, the higher the risk of being exposed. Thus, a limit on the number of attacker nodes is necessary, which leads us to Problem 2.

Problem 2 (k≥-metric antidimension (ADIM_{≥k})). Given k, find a k′-antiresolving set S such that (i) k′ ≥ k and (ii) S is of minimum cardinality.

Problem 2 is an extension of Problem 1 that attempts to find the largest value of k while minimizing the number of attacker nodes. A solution to this problem, whose cardinality we denote by L_opt(k), asserts a few interesting statements. For example, an adversary controlling ℓ attacker nodes, where ℓ < L_opt(k), cannot uniquely re-identify any node in the network with a probability better than 1/k. However, using a sufficient number of nodes (ℓ ≥ L_opt(k)), one can re-establish such a possibility.

The third problem focuses on the trade-off between the number of attacker nodes and the privacy violation probability. Given two measures (k′, ℓ′)-anonymity and (k″, ℓ″)-anonymity, where k′ > k″ and ℓ′ < ℓ″, it is easy to observe that the (k′, ℓ′)-anonymity measure provides a smaller privacy violation probability but also has a lower tolerance for attacker nodes. This trade-off leads us to the third problem.

Problem 3 (k=-metric antidimension (ADIM_{=k})). Given a positive integer k, find a k-antiresolving set of nodes S of minimum cardinality, if such a set exists.
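
As a purely brute-force reference for what the three problems ask (and not a reflection of the algorithms of [16]), the sketch below enumerates all non-empty proper vertex subsets and reuses the antiresolving_k helper from the sketch in Section 2.1; it is exponential in n and meant only for very small graphs.

```python
from itertools import combinations

def _proper_subsets(adj):
    """All non-empty proper subsets of the node set (exponentially many)."""
    nodes = list(adj)
    for size in range(1, len(nodes)):
        for S in combinations(nodes, size):
            yield set(S)

def adim(adj):
    """Problem 1 (ADIM): a k-antiresolving set S maximizing k; returns (k, S)."""
    S = max(_proper_subsets(adj), key=lambda T: antiresolving_k(adj, T))
    return antiresolving_k(adj, S), S

def adim_geq_k(adj, k):
    """Problem 2 (ADIM_{>=k}): a minimum-cardinality k'-antiresolving set with
    k' >= k, or None if no such set exists."""
    candidates = [S for S in _proper_subsets(adj) if antiresolving_k(adj, S) >= k]
    return min(candidates, key=len) if candidates else None

def adim_eq_k(adj, k):
    """Problem 3 (ADIM_{=k}): a minimum-cardinality k-antiresolving set, or
    None if no k-antiresolving set exists."""
    candidates = [S for S in _proper_subsets(adj) if antiresolving_k(adj, S) == k]
    return min(candidates, key=len) if candidates else None

# With the hypothetical 5-node path from the sketch in Section 2.1:
# adim(path)          -> (2, {3})
# adim_geq_k(path, 2) -> {3}
# adim_eq_k(path, 1)  -> a single node, e.g. {1}
```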

Chatterjee et al. [16] investigated Problems 1–3 from a computational complexity perspective. The following theorems summarize their findings on Problems 1–3. The non-trivial mathematical proofs of these theorems are unfortunately outside the scope of this chapter; we strongly recommend that readers interested in the proofs consult the original paper [16].

Theorem 1. [16]

  1. Both ADIM and ADIM_{≥k} can be solved in O(n^4) time.

  2. Both ADIM and ADIM_{≥k} can also be solved in O((n^4 log n) / k) time with high probability.

Theorem 2. [16]

  1. ADIM_{=k} is NP-complete for any k in the range 1 ≤ k ≤ n^ε, where 0 ≤ ε < 1/2 is any arbitrary constant, even if the diameter of the input graph is 2.

  2. Assuming NP ⊄ DTIME(n^{log log n}), there exists a universal constant δ > 0 such that ADIM_{=k} does not admit a (1 − δ) ln n-approximation for any integer k in the range 1 ≤ k ≤ n^ε, for any constant 0 ≤ ε < 1/2, even if the diameter of the input graph is 2.

  3. If k = n − c for some constant c, then ADIM_{=k} can be solved in polynomial time.

Theorem 3. [16]

  1. ADIM_{=1} admits a (1 + ln(n − 1))-approximation in O(n^3) time.

  2. If G has at least one node of degree 1, then ADIM_{=1} can be solved in O(n^3) time.

  3. If G does not contain a cycle of 4 edges, then ADIM_{=1} can be solved in O(n^3) time.

4.1 Algorithms

The algorithms devised in [16] to address Problems 1–3 are referred to here as Algorithm 1 and Algorithm 2. It is important to note that ADIM can be solved in O(n^5) time by repeatedly solving ADIM_{≥k} for k = n − 1, n − 2, …, 1 in order to find the largest obtainable value of k for which a solution exists (i.e., for which L_opt(k) is finite). However, a few modifications to Algorithm 1, the O(n^4)-time algorithm for ADIM_{≥k}, directly result in an O(n^4)-time solution for ADIM, which is given as Algorithm 2 in [16].
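
The pseudocode of Algorithms 1 and 2 from [16] is not reproduced in this chapter. The sketch below only illustrates the reduction just described, treating an ADIM_{≥k} solver as a black box: solve_adim_geq_k is a placeholder name standing in for Algorithm 1 of [16], assumed to return a suitable antiresolving set or None when none exists.

```python
def adim_via_repeated_calls(adj, solve_adim_geq_k):
    """Solve ADIM by repeatedly solving ADIM_{>=k} for k = n-1, n-2, ..., 1,
    returning the first k for which a solution exists together with that set.
    `solve_adim_geq_k(adj, k)` is a placeholder for Algorithm 1 of [16]; it is
    assumed to return a suitable antiresolving set, or None if none exists.
    With an O(n^4)-time solver this wrapper runs in O(n^5) time overall."""
    n = len(adj)
    for k in range(n - 1, 0, -1):
        S = solve_adim_geq_k(adj, k)
        if S is not None:
            return k, S
    return None
```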


5. Empirical results

In [18], DasGupta et al. investigated the resistance of eight real-world networks against active attacks with respect to the (k, ℓ)-anonymity measure. All the networks under investigation were treated as unweighted graphs, and the direction of edges (if the network was directed) was ignored during the analysis. Table 1 contains general information about these networks. Results for both ADIM and ADIM_{≥k} were obtained by running Algorithm 1 of [16] on the networks, the return values of Algorithm 1 being an exact solution to Problem 2. The exact solution to Problem 1, on the other hand, can be obtained by combining Algorithm 1 with a binary search on k to find the largest value of k such that V_opt(k) ≠ ∅ [18], as sketched below.
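
The binary search mentioned above can be sketched as follows. It relies on the observation that feasibility of ADIM_{≥k} is monotone in k (a set witnessing some k also witnesses every smaller positive k), so the largest feasible k can be located with O(log n) solver calls; solve_adim_geq_k is again a placeholder for Algorithm 1 of [16], not the authors' code.

```python
def largest_feasible_k(adj, solve_adim_geq_k):
    """Binary search for k_opt, the largest k with V_opt(k) nonempty.
    Feasibility of ADIM_{>=k} is monotone in k (a set witnessing some k also
    witnesses every smaller positive k), so binary search applies.
    `solve_adim_geq_k(adj, k)` is a placeholder for Algorithm 1 of [16]."""
    lo, hi = 1, len(adj) - 1
    best = None
    while lo <= hi:
        mid = (lo + hi) // 2
        if solve_adim_geq_k(adj, mid) is not None:
            best, lo = mid, mid + 1
        else:
            hi = mid - 1
    return best
```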

Name | Number of nodes | Number of edges | Description
Zachary Karate Club [20] | 34 | 78 | Network of friendship between 34 members of a karate club
San Juan Community [21] | 75 | 144 | Network of visiting relations between families living in farms in San Juan Sur, Costa Rica, 1948
Jazz Musician Network [22] | 198 | 2842 | A social network of jazz musicians
University Rovira i Virgili emails [23] | 1133 | 10903 | The network of email interchanges between members of the university
Enron Email Data Set [24] | 1088 | 1767 | Enron email network
Email Eu Core [25] | 986 | 24989 | Emails from a large European research institute
UC Irvine College Message platform [26] | 1896 | 59835 | Messages on a Facebook-like platform at UC Irvine
Hamsterster friendships [27] | 1788 | 12476 | Friendships between users of the Hamsterster website

Table 1.

List of 8 social networks studied in [18].

The results for both Problem 1 and Problem 2 for the networks in Table 1 are depicted in Table 2. Results in Table 2 provide the following interesting insights with respect to resistance against privacy attacks in real-world social networks [19].

  • All networks, with the exception of the “Enron Email Data” network, will have a significant percentage of their users compromised if an adversary gains control of only one node (varying between 2.6% of users compromised in the “University Rovira i Virgili emails” network and 26.5% of users compromised in the “Zachary Karate Club” network).

  • For all networks, with the exception of the “Enron Email Data” network, the minimum privacy violation probability is notably higher than 0 (varying between 0.019 for the “UC Irvine College Message platform” network and 0.25 for the “Hamsterster friendships” network). The minimum privacy violation probability of the “Hamsterster friendships” network is notably higher compared to all other networks.

  • In comparison to the other networks, the “Zachary Karate Club” and the “San Juan Community” networks have a higher percentage of their users compromised if subjected to a privacy attack.

Name | n | k_opt | p_opt = 1/k_opt | L_opt(k_opt) | k_opt/n
Zachary Karate Club [20] | 34 | 9 | 0.111 | 1 | 26.5%
San Juan Community [21] | 75 | 7 | 0.143 | 1 | 9.3%
Jazz Musician Network [22] | 198 | 12 | 0.084 | 1 | 6.0%
University Rovira i Virgili emails [23] | 1133 | 29 | 0.035 | 1 | 2.6%
Enron Email Data Set [24] | 1088 | 153 | 0.007 | 935 | 14.1%
Email Eu Core [25] | 986 | 39 | 0.026 | 1 | 3.4%
UC Irvine College Message platform [26] | 1896 | 55 | 0.019 | 1 | 2.9%
Hamsterster friendships [27] | 1788 | 4 | 0.25 | 1 | 0.22%

Table 2.

Results for ADIM using Algorithm 1 [18].

Here n denotes the number of nodes in the social graph, k_opt is the largest value of k such that V_opt(k) ≠ ∅, p_opt = 1/k_opt is the corresponding privacy violation probability, and L_opt(k_opt) is the minimum number of attacker nodes for the corresponding k_opt.

The exceptional network is the “Enron Email Data” network which, due to a high value of L_opt(k_opt), is very resilient against such an attack. An adversary needs to control at least 86% of the network to achieve a privacy violation probability of p_opt = 0.007, which is not feasible in practice. This interesting observation for the “Enron Email Data” network motivated further inspection of different values of k. As shown in Table 3, L_opt(k) for the “Enron Email Data” network does not decrease significantly until k is set to a much smaller value compared to k_opt, which further emphasizes that violating the privacy of the “Enron Email Data” network is not guaranteed in practice. The authors in [18] also investigated the (k, ℓ)-anonymity measure on synthetic networks constructed based on both Erdős-Rényi random graphs and Barabási-Albert scale-free networks; we refer the reader to the original paper for more information.

k | 4 | 5 | 10 | 20 | 40 | 60 | 100 | 120 | 153
p(k) = 1/k | 0.25 | 0.2 | 0.1 | 0.05 | 0.025 | 0.017 | 0.01 | 0.009 | 0.007
L_opt(k) | 1 | 334 | 463 | 567 | 683 | 842 | 935 | 935 | 935

Table 3.

L_opt(k) values recorded for various k > 1 for the “Enron Email Data” network [18]. Note that the values satisfy L_opt(k) ≥ L_opt(k − 1).


6. Conclusions

Since their emergence about a decade ago, social networks have rapidly grown and infiltrated every aspect of our daily lives. With the rapidly expanding reliance on these platforms, social networks like Facebook and Twitter are becoming a goldmine of personal information and user behavior data, which makes the study of these networks of prime importance. The valuable information stored within these platforms makes them the target of malicious entities that try to compromise the privacy of the users, which may further lead to unwanted disclosure of sensitive attributes of the network.

In this chapter, we have reviewed a novel privacy measure that quantifies the resistance of a large social network against privacy-violating attacks. We reviewed some efficient algorithms to compute this measure on a social graph and revisited the privacy violation properties of eight real-world networks. The current theoretical and empirical results for (k, ℓ)-anonymity pave the way for further investigation of this measure, as well as for addressing its shortcomings and limitations.


Acknowledgments

We thank Ismael G. Yero for useful discussions. This research was partially supported by NSF grants IIS-1160995 and IIS-1814931.

References

  1. Trujillo-Rasua R, Yero IG. k-metric antidimension: A privacy measure for social graphs. Information Sciences. 2016;328:403-417
  2. Bar-Yehuda R, Chor B, Kushilevitz E, Orlitsky A. Privacy, additional information and communication. IEEE Transactions on Information Theory. 1993;39(6):1930-1943
  3. Comi M, DasGupta B, Schapira M, Srinivasan V. On communication protocols that compute almost privately. Theoretical Computer Science. 2012;457:45-58
  4. Feigenbaum J, Jaggard AD, Schapira M. Approximate privacy: Foundations and quantification. In: Proceedings of the 11th ACM Conference on Electronic Commerce; ACM. 2010. pp. 167-178
  5. Kushilevitz E. Privacy and communication complexity. SIAM Journal on Discrete Mathematics. 1992;5(2):273-284
  6. Yao AC. Some complexity questions related to distributive computing (preliminary report). In: Proceedings of the 11th Annual ACM Symposium on Theory of Computing; ACM. 1979. pp. 209-213
  7. Newman ME. The structure and function of complex networks. SIAM Review. 2003;45(2):167-256
  8. Albert R, Barabási AL. Statistical mechanics of complex networks. Reviews of Modern Physics. 2002;74(1):47
  9. Zhou B, Pei J, Luk W. A brief survey on anonymization techniques for privacy preserving publishing of social network data. ACM SIGKDD Explorations Newsletter. 2008;10(2):12-22
  10. Backstrom L, Dwork C, Kleinberg J. Wherefore art thou r3579x?: Anonymized social networks, hidden patterns, and structural steganography. In: Proceedings of the 16th International Conference on World Wide Web; ACM. 2007. pp. 181-190
  11. Netter M, Herbst S, Pernul G. Analyzing privacy in social networks—An interdisciplinary approach. In: 2011 IEEE 3rd International Conference on Privacy, Security, Risk and Trust and 2011 IEEE 3rd International Conference on Social Computing; IEEE. 2011. pp. 1327-1334
  12. Wu X, Ying X, Liu K, Chen L. A survey of privacy-preservation of graphs and social networks. In: Managing and Mining Graph Data; Springer, Boston, MA. 2010. pp. 421-453
  13. Slater PJ. Leaves of trees. Congressus Numerantium. 1975;14:549-559
  14. Harary F, Melter RA. On the metric dimension of a graph. Ars Combinatoria. 1976;2:191-195
  15. Yu H, Gibbons PB, Kaminsky M, Xiao F. SybilLimit: A near-optimal social network defense against sybil attacks. In: 2008 IEEE Symposium on Security and Privacy; IEEE. 2008. pp. 3-17
  16. Chatterjee T, DasGupta B, Mobasheri N, Srinivasan V, Yero IG. On the computational complexities of three problems related to a privacy measure for large networks under active attack. Theoretical Computer Science. 2019;775:53-67
  17. Leiserson CE, Rivest RL, Cormen TH, Stein C. Introduction to Algorithms. Cambridge, MA: MIT Press; 2001
  18. DasGupta B, Mobasheri N, Yero IG. On analyzing and evaluating privacy measures for social networks under active attack. Information Sciences. 2019;473:87-100
  19. Johnson DS. Approximation algorithms for combinatorial problems. Journal of Computer and System Sciences. 1974;9(3):256-278
  20. Zachary WW. An information flow model for conflict and fission in small groups. Journal of Anthropological Research. 1977;33(4):452-473
  21. Loomis CP, Morales JO, Clifford RA, Leonard OE. Turrialba: Social Systems and the Introduction of Change. Glencoe, IL: Free Press; 1953
  22. Gleiser PM, Danon L. Community structure in jazz. Advances in Complex Systems. 2003;6(04):565-573
  23. Guimera R, Danon L, Diaz-Guilera A, Giralt F, Arenas A. Self-similar community structure in a network of human interactions. Physical Review E. 2003;68(6):065103
  24. Enron email network. Available from the UC Berkeley Enron Email Analysis website: http://bailando.sims.berkeley.edu/enron_email.html
  25. Paranjape A, Benson AR, Leskovec J. Motifs in temporal networks. In: Proceedings of the 10th ACM International Conference on Web Search and Data Mining; ACM. 2017. pp. 601-610
  26. Panzarasa P, Opsahl T, Carley KM. Patterns and dynamics of users’ behavior and interaction: Network analysis of an online community. Journal of the American Society for Information Science and Technology. 2009;60(5):911-932
  27. Hamsterster friendships network dataset—KONECT, April 2017. Available from: http://konect.uni-koblenz.de/networks/petster-friendships-hamster
