Opinion Dynamics and the Inevitability of a Polarised and Homophilic Society

Written By

Rafael Prieto Curiel

Submitted: 29 October 2020 Reviewed: 02 March 2021 Published: 13 April 2021

DOI: 10.5772/intechopen.96989

From the Edited Volume

Theory of Complexity - Definitions, Models, and Applications

Edited by Ricardo López-Ruiz


Abstract

A polarised society, split between ideological extremes, is frequently observed despite individual and collective efforts to reach a consensual opinion. Human factors, such as the tendency to interact with similar people, the reinforcement of such homophilic interactions, or the selective exposure and assimilation of distinct views, are some of the mechanisms by which opinions might evolve into a more divergent distribution. Here, an opinion dynamics model is considered in which individuals are exposed to alternating waves of propaganda, each fully supporting one of two extreme views. People exposed to an extreme narrative adopt it and share it with their peers depending on the persuasiveness of the propaganda, and mix it with their previous opinion depending on the volatility of opinions to form a new individual view. Social networks help capture elements such as homophily, whilst persuasiveness and memory capture biased assimilation and the exposure to ideas inside and outside echo chambers. The social levels of homophily and polarisation after repeated exposure to extreme narratives define distinct trajectories along which society becomes more or less homophilic and reaches extremism or consensus. There is extreme sensitivity to the parameters, so that a small perturbation to the persuasiveness or the memory of a network in which consensus is reached could lead to the polarisation of opinions, but there is also unpredictability in the system, since even from the same starting point a society could follow substantially different trajectories and end with a consensual opinion or with extreme polarising views.

Keywords

  • Opinion dynamics
  • polarisation
  • homophily
  • consensus
  • diffusion
  • interaction network

1. Introduction

Modelling some aspects of our society is challenging at an individual and at a collective level. Every idea, every human feeling and every interaction is so unique that measuring and modelling human constructs such as freedom, love, traditions, friendship, power, or fear is challenging from the outset. Obtaining a generalisation or an abstraction, such as physical laws, which applies at a social level is frequently not feasible. Two equal drops of water will act the same under similar circumstances, but no two individuals are so similar as to ensure they feel the same, think the same or react the same to some circumstances. Social settings, as opposed to physically observed ones, often lack measuring instruments and units, and it is almost impossible to repeat experiments, so transforming our knowledge about society into simple, absolute, and universal descriptions is often unimaginable [1]. Social models are inevitably incomplete and inaccurate, because of scientific limitations and a lack of data [2] and because conventional scientific approaches cannot be applied to many of the problems faced by our society [3]. Furthermore, just a few years ago it was impossible to use the right amount of data or to model more than just a few aspects of the individuals, but today we are capable of simulating large human systems [4] with more complex interactions between their members and their environment [5], of understanding the emergence of crowd behaviour in different situations, and of challenging and, in some cases, measuring some of the theories which are frequently applied across scientific fields [6]. Models of collective human behaviour have gained interest as the need for them grows, their results are increasingly applied in policy and decision-making, and their implications spread more widely.

Models of social behaviour are complex. Many features observed at a social level are an emergent behaviour that results from interactions at a personal level and feedback between society and its individuals. Social behaviours are the result of collective individual actions. People adapt rapidly to new circumstances, transforming society as a whole in the process, for instance, by making it normal to maintain some physical distance from others or to wear a facemask during the COVID-19 pandemic; some of these social features also synchronise our behaviours, through the constant feedback others provide.

Modelling society usually requires a substantial level of simplification at the microscopic, individual level in the hope of resembling the macroscopic, social behaviour [7]. The mathematical approach is usually to study the emergent collective patterns when thousands or millions of people -or events- are considered. For instance, a crime might be regarded as a point on a map, a friendship could be considered as a link in a network, or a driver could be modelled by their position and speed; however, these simplifications made within a social context have helped us to understand the emergent patterns of criminal hotspots [8], the small-world phenomenon observed in many social networks [9] or the formation of traffic jams despite efforts from drivers to avoid them [10].

Opinions and the ways they are updated form a complex social system. In general, individuals have an opinion about a specific topic, which is somehow updated when they are confronted with other ideas. Usually, a person gains confidence in their views when they are reinforced by exposure to similar ideas, or their beliefs are challenged when they are exposed to different opinions. The exposure to distinct views is a social process and therefore, updating beliefs is mostly a social process as well, which happens perhaps during a simple conversation with others, when listening to what others say on the news, or when reading what they publish on social media. And, as with other complex social systems, individuals transform their society with their opinion, but society transforms individuals as well. There is feedback between individual opinions and collective perceptions and ideas.

1.1 Polarised opinions

Polarisation and the way it emerges is one of the key questions in opinion dynamics models [11]. An increasingly polarised society is observed in attitudes towards the COVID-19 pandemic, views in favour of or against a vaccine [12], the consumption of media outlets, opinions on social media and many more. Increased exposure to ideas within a homogeneous community intensifies its members' tendency to be credulous, whether towards scientific evidence, unsubstantiated rumours, inconclusive evidence or even fake news. Polarised opinions might foster confirmation bias, so that people with more extreme opinions tend to become more certain in their beliefs [13], and this contributes to the proliferation of fake news, whereby once an idea is adopted, it is rarely corrected [14].

Frequently, individuals want to persuade others -even unintentionally- to adopt an idea and so there are active efforts to reach a consensual opinion. Observing opinion dynamics only at a global scale and ignoring individual dynamics often leads opinions to a consensus state [15], in a similar way in which temperature differences tend to vanish. Yet, two or more contrasting ideas might be highly popular, even if all individual efforts try to reach a consensual opinion. Polarisation, or even fragmentation among many opinions, might be one of the emergent states of collective opinion dynamics, where contrasting ideas might co-exist as a steady state in a society.

Human factors such as the frequency at which we form ties with similar people (homophily), the tendency to adopt similar opinions as a result of social interactions (social influence), the fact that when presented with mixed evidence, individuals might perceive it as positive feedback for their initial position (biased assimilation), or interpreting the acceptance of an idea as reinforcement when sharing an opinion in a social environment (social feedback) are some of the causal mechanisms by which the process of updating ideas might be polarising, meaning that final opinions are more divergent than initial opinions [11, 16].

1.2 Homophilic opinions

Usually, a person interacts with others of similar age, income or other sociodemographic, behavioural, and intrapersonal characteristics, including opinions or views on a certain topic [17, 18]. If a population has polarised opinions, it means that, at a global level, there is a high probability that when two individuals are randomly picked, they hold extremely different views. However, little is known with respect to the actual interactions. Individuals from a highly polarised society could almost always interact with people who share similar views if polarised bubbles rarely interact with each other. Yet, a different opinion process within the same polarised society is observed when people frequently interact with others who share opposite views. In a polarised population, opinions have high homophily if most of the individuals interact with people with similar views, and opinions are not homophilic if people with different views interact frequently. See Figure 1, where opinions are represented by the intensity of the colour of a node.

Figure 1.

Opinions (represented by the different colours of the nodes) are shared between individuals who interact (if there is a link between the nodes). Different states in which opinions are distributed show a small polarisation (left part, where most individuals have similar views) or high polarisation (right part, where opinions are split in half) and might show low homophily (bottom part, where opposite opinions are frequently shared among interacting individuals) or high homophily (top part, where opposite opinions are rarely shared among connected nodes).

Polarisation between two opinions -or fragmentation among many- is detected when opinions are observed at a global scale, but to detect whether opinions are homophilic, more local information with respect to the interactions is needed. For instance, in the 2016 UK referendum on remaining in the European Union, 52% of the votes were to leave (a highly polarised election), but at a more local level, the area which voted most heavily in favour of one of the options was Gibraltar (where nearly 96% of the votes were to remain), whereas in Watford results were evenly distributed between leave and remain. Thus, Gibraltar had the lowest polarisation, with a near consensus for one of the options, but Watford had the highest polarisation between the leave and remain options. In Watford, however, with its highly polarised election outcome, interactions could still happen very frequently between people with similar views, if the opinion sharing process is highly homophilic and there are few interactions between the two voter groups.

A slightly polarised society does not have homophilic opinions, but a polarised society might or might not have homophilic opinions, depending on how individuals interact and on the opinion profile. The relevance of opinion homophily stems from the fact that in a highly polarised population, most individuals might not be aware that so many people with different views even exist, whereas in a polarised society with low levels of homophily, encounters between people with opposite views happen frequently. Furthermore, a highly polarised society might be a steady state of some opinion dynamics, but given the right circumstances (parameters) that state could be highly homophilic or one in which most individuals interact frequently with people with different views.

Social media and other technological changes could increase exposure to diverse perspectives [19], but at the same time they facilitate mechanisms, such as the creation of links or friendships in the network, filtering algorithms and the ranking of information, which may accelerate the formation of homophilic communities [16, 20]. People frequently aggregate in groups of interest, and those existing communities frequently adopt narratives from different topics, reinforcing polarisation across distinct themes, for instance, political ideology and perceptions with respect to the COVID-19 pandemic [21, 22]. People interacting with homogeneous communities tend to develop more extreme opinions and become more certain in their beliefs [13], which can favour the spread of misinformation from partisan media and increase animosity within the population [23]. For COVID-19, for example, most of the misinformation detected involves reconfigurations, where existing (often true) facts are adjusted to fit different narratives [24] which are then reproduced by large homophilic groups as facts. Massive misinformation is becoming one of the main threats to our society [14, 25, 26], and it might be fostered by an increasingly homophilic opinion dynamic process and a polarised society.

2. Modelling opinions and their dynamics

Opinion formation has been studied from many angles and with different mathematical techniques, including mean-field theory and kinetic models of opinion formation [27], or with agents on a social network. Individual opinions on a certain topic are usually modelled as a single-valued number contained in some closed interval whose endpoints represent extreme (opposing) opinions, for example, left–right leaning voters [28], the level of production of an employee in a plant [7] or perceptions between security and insecurity [29]. The process of opinion updating is then modelled as the result of interaction with other views, a process of self-thinking, some memory loss, or external factors. Interactions between individuals are usually modelled on some social structure, such as a network, considering some spatial proximity, or considering some social aspects, such as the level of influence of one individual on others [30]. A long-term, steady distribution of opinions is usually obtained, either as an analytical solution to some differential equations or through simulations, which reveals, among others, the formation of opinion clusters, political segregation [31], vaccine hesitancy [12], the use of certain tools [32], the spread of fear of crime more as a result of opinion dynamics than of crime itself [29] or even the diffusion of fake news [14].

2.1 The key ingredients in opinion dynamics models

There are four ingredients in opinion dynamics models [30, 31]:

  1. Individual opinions - Modelled usually as a number, say $s_i(t) \in [-1, +1]$ for individual $i$ at time $t$, where the extremes of the interval, $-1$ and $+1$, are identified as opposing opinions, for instance, the levels of support or opposition for an idea, perceptions of security or insecurity, or left–right leaning political views. Other approaches include multi-dimensional views or discrete opinions.

  2. External or individual forces - External forces such as exposure to news sources [33], or events such as suffering a crime [29] or an accident, and individual forces, such as memory loss may cause an individual to update their views about certain topics.

  3. Updating process as interactions with others - Exposure to different ideas is frequently considered as the updating mechanism of opinions. Frequently, through interactions with others, person $i$ finds a distinct opinion, $s_j(t)$, and might update their own views according to some function based on their opinion $s_i(t)$ and the “new” one, as $s_i(t+1) = f(s_i(t), s_j(t))$, where usually the function $f$ is assumed to bring opinion $s_i$ closer to $s_j$ as a result of some consensus effort. Interactions are frequently modelled on some network, where two connected nodes (individuals) share opinions with (some of) their adjacent nodes. The network structure and whether it is directional thus play a role in the updating process.

  4. Metrics - From the collective opinions, or “opinion profile”, $S(t) = (s_1(t), s_2(t), \ldots, s_N(t))$, usually its mean $\bar{S}(t)$ and other metrics are frequently analysed, perhaps dividing by some population groups or node attributes, usually as a long-run behaviour of the dynamics.

Although some analytical results are available [27, 28], the dynamics are usually simulated on a network. The technique allows considering individual aspects, such as assertiveness, persuasiveness, supportiveness, extremists or opinion volatility [28, 31, 34, 35].

2.1.1 Measuring polarisation and homophily

A group might reach an agreement or consensus on some opinions if the majority of the individuals share similar views, whereas a group might be polarised if opinions are divergent, with extremism being the state in which opinions are mostly concentrated at the two extremes. One way to measure the level of polarisation in a population is by the variance of the opinion profile, where a large variance means a more polarised society and a small variance means consensus. Formally, the polarisation $\Phi$ of an opinion profile $S$ is given by

$$\Phi(S) = \operatorname{Var}(S) = \frac{1}{N}\sum_{i=1}^{N}\left(s_i - \bar{s}\right)^2. \qquad (1)$$

For opinions bounded inside the $[-1, +1]$ interval, very small populations could have $\Phi(S)$ values larger than 1, but for a population with more than 100 individuals, $\Phi < 1.01$, and so for large enough populations, it might be considered that $\Phi$ takes values in the interval from 0 (if there is consensus) to 1 (if there is extremism). If a random opinion $s_k \in [-1, +1]$ is sampled for each individual, a 95% interval of the polarisation is $\Phi(S) \in (0.327, 0.338)$ and therefore, the distribution of opinions $S$ is classified as consensus if $\Phi(S) \approx 0$; consensual if $\Phi(S) < 1/3$; homogeneous if $\Phi(S) \approx 1/3$, so that the polarisation is similar to a random distribution of opinions; polarising if $\Phi(S) > 1/3$; and as extremism if $\Phi(S) \approx 1$ (Figure 2).
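As a minimal sketch (not code from the chapter), the polarisation of Eq. (1) and the rough classification above can be computed as follows; the function names and the numerical tolerances are illustrative assumptions.

```python
import numpy as np

def polarisation(opinions: np.ndarray) -> float:
    """Polarisation of an opinion profile, Eq. (1): the variance of the opinions."""
    return float(np.var(opinions))

def classify(phi: float, tol: float = 0.02) -> str:
    """Rough classification of an opinion profile, using an illustrative tolerance."""
    if phi < tol:
        return "consensus"
    if phi < 1 / 3 - tol:
        return "consensual"
    if abs(phi - 1 / 3) <= tol:
        return "homogeneous (similar to random opinions)"
    if phi < 1 - tol:
        return "polarising"
    return "extremism"

# Half the population at -1 and half at +1 gives the maximum polarisation,
# whereas uniformly random opinions give a variance close to 1/3.
extreme = np.concatenate([np.full(1000, -1.0), np.full(1000, +1.0)])
random_opinions = np.random.default_rng(0).uniform(-1, 1, 2000)
print(classify(polarisation(extreme)))          # extremism (Phi = 1)
print(classify(polarisation(random_opinions)))  # homogeneous (Phi close to 1/3)
```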

Figure 2.

Classification of collective opinions according to their distribution (represented as the height of each colour bar), from consensus (left) where $\Phi(S) \approx 0$, to extremism (right) where $\Phi(S) \approx 1$.

The process of opinion dynamics has a high level of homophily if most of the interactions happen between individuals of similar views and has a low level otherwise. Formally, if $A_i$ are the adjacent nodes of $i$, then the average opinion distance $D_i$ experienced by $i$ is given by

$$D_i = \frac{1}{d_i}\sum_{j \in A_i} \left| s_i - s_j \right|, \qquad (2)$$

where $d_i$ is the degree of $i$, so that $D_i$ gives the average opinion distance from a node to its adjacent neighbours (and define $D_i = 0$ if $i$ has no neighbours). The opinion homophily $\Lambda(S)$ is defined as

$$\Lambda(S) = 1 - \frac{1}{N}\sum_{i=1}^{N} D_i, \qquad (3)$$

a metric suited for measuring homophily based on a continuous node attribute, such as opinions, which takes high values if individuals interact with others of similar views and lower values (possibly negative) if interactions are more frequent between individuals of very different views. Notice that the metric depends on the opinion profile but also on the network topology. On a linear network, for instance, where all nodes have two neighbours except for the two extremes, opinions in the $[-1, +1]$ interval are highly polarised (extremism) if half of the individuals have $-1$ and the other half have $+1$ as opinions, and $\Phi(S) \approx 1$. Such an opinion profile is not homophilic with alternating opinions, $S = (+1, -1, +1, -1, \ldots)$, and $\Lambda(S) = -1$, but a high level of homophily is observed when opposite opinions are located on the two extremes of the network, so $S = (+1, +1, \ldots, +1, -1, \ldots, -1)$, in which case only the two neighbouring individuals located at the boundary between the opinion groups interact with a person who has a distinct opinion from their own, and so $\Lambda(S) = 1 - 2/N \approx 1$. The expected opinion distance between two randomly selected opinions is $2/3$, from which $\Lambda(S) \approx 1$ means preferential interactions between individuals of similar views; $\Lambda(S) > 1/3$ means homophilic interactions; $\Lambda(S) \approx 1/3$ means random interactions; and $\Lambda(S) < 1/3$ means discouraged interactions between people of similar views.
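A minimal sketch of the homophily metric of Eqs. (2) and (3), assuming a NetworkX graph and opinions stored in a dictionary (illustrative names, not the author's code), reproducing the linear-network examples above:

```python
import networkx as nx

def opinion_homophily(G: nx.Graph, opinion: dict) -> float:
    """Lambda(S) = 1 - (1/N) sum_i D_i, where D_i is the mean opinion distance
    between node i and its neighbours (D_i = 0 for isolated nodes), Eqs. (2)-(3)."""
    total = 0.0
    for i in G.nodes:
        neighbours = list(G.neighbors(i))
        if neighbours:
            total += sum(abs(opinion[i] - opinion[j]) for j in neighbours) / len(neighbours)
    return 1.0 - total / G.number_of_nodes()

# Linear (path) network: alternating opinions give Lambda = -1, whereas two
# opposite blocks give Lambda = 1 - 2/N, close to +1 for large N.
N = 10
G = nx.path_graph(N)
alternating = {i: (+1.0 if i % 2 == 0 else -1.0) for i in G.nodes}
blocks = {i: (+1.0 if i < N // 2 else -1.0) for i in G.nodes}
print(opinion_homophily(G, alternating))  # -1.0
print(opinion_homophily(G, blocks))       # 0.8 = 1 - 2/10
```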

2.2 An opinion dynamics model

Consider a diffusion process of opinions on a network, where the four key ingredients (individual opinions, updating process due to individual or external forces, interactions, and the corresponding metrics) are defined as follows. Initially, $N$ individuals have a randomly-distributed opinion $s_i(0) \in [-1, +1]$, where the extremes represent two opposing views on a certain topic. As external forces in the dynamics, we consider exposure of the individuals to some “propaganda” in favour of one of the views. Each time step, a randomly selected group of 1% of the individuals is exposed, in an alternating sequence, to some supporting mechanism in favour of one of the two views. It is assumed that the views fully favour one of the two extreme opinions, so that they are considered as $v_1 = +1$, the first force, which favours view $+1$, then $v_2 = -1$, which supports view $-1$, and so on, with $v_k = (-1)^{k+1}$. As opinion dynamics, individuals who are exposed to any of the views ($v_k = \pm 1$) decide whether to “trust” or to “dismiss” the views based on their current opinion and on the persuasiveness of the views $\theta$, where $\theta \in [0, 1]$ is a parameter which captures how seductive the views are (large values of $\theta$ mean that views are more seductive and individuals are more inclined to trust them, and smaller values mean that views are likely to be dismissed). Due to confirmation bias, individuals with an opinion closer to $+1$ are more likely to trust $v_k = +1$ propaganda, as it confirms their views, and more likely to ignore $v_l = -1$ propaganda for the same reason. To capture confirmation bias, it is assumed that person $i$ with opinion $s_i$ trusts view $v_k$ with probability

$$\theta\,\frac{1 + s_i}{2} \quad \text{if } v_k = +1, \text{ and} \qquad (4)$$
$$\theta\,\frac{1 - s_i}{2} \quad \text{if } v_k = -1. \qquad (5)$$

With this condition, a person with opinion $s_i = 0.4$ trusts view $v_k = +1$ with probability $0.7\theta$, but trusts view $v_j = -1$ with probability $0.3\theta$, for some $\theta$ which depends on how seductive the corresponding propaganda is, so that individuals are more inclined to trust views which favour their own opinions. Individuals who are seduced by some propaganda share it with all their contacts as an active effort to persuade them, say by sharing or posting the views on social media. Individuals who dismiss propaganda make a permanent decision to ignore it, do not update their views and do not share it with their contacts. Thus, when individuals are exposed for the first time to the views, they make a permanent choice whether to accept them (and update their views and share them) or to ignore them (and do nothing). Therefore, after 1% of the individuals are first exposed to propaganda, some individuals trust it and share it with their contacts, and so on until no one is exposed for the first time to that propaganda. It is assumed that the sharing mechanism (social media, say) works faster than the creation of new propaganda, so that by the time a new view $v_{k+1}$ is created and distributed, the dynamics of the previous (opposing) propaganda $v_k$ have finished. Each wave of propaganda follows a diffusion process similar to the SIR model used in epidemics, where a small percentage of the individuals are initially exposed. The “infection” (the propaganda) passes through individuals, and the distribution of the recovered individuals is observed [12, 32].
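As an illustrative sketch (the names and structure are assumptions, not the author's implementation), the trust rule of Eqs. (4) and (5) can be written compactly as $\theta(1 + v\,s)/2$ for propaganda $v = \pm 1$:

```python
import numpy as np

rng = np.random.default_rng(42)

def trusts(s: float, v: int, theta: float) -> bool:
    """Whether an individual with opinion s in [-1, 1] trusts propaganda v = +1 or -1,
    following Eqs. (4)-(5); theta is the persuasiveness of the propaganda."""
    p = theta * (1 + v * s) / 2   # theta*(1+s)/2 if v = +1, theta*(1-s)/2 if v = -1
    return rng.random() < p

# Worked example from the text: with s = 0.4, propaganda v = +1 is trusted with
# probability 0.7*theta and propaganda v = -1 with probability 0.3*theta.
theta = 0.8
acceptances = sum(trusts(0.4, +1, theta) for _ in range(10_000))
print(acceptances / 10_000)  # close to 0.7 * 0.8 = 0.56
```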

Individuals who accept some propaganda at time $t$ update their views according to the volatility of their opinions, $\mu \in [0, 1]$, such that individuals who accepted view $v_k$ update their opinion between $t$ and $t+1$ according to

$$s_i(t+1) = \mu v_k + (1 - \mu)\, s_i(t), \qquad (6)$$

so that if opinions are volatile (that is, individuals easily update their views, with a large value of $\mu$), then the opinion at time $t+1$ depends mostly on the views of the propaganda they accepted, but with more rigid opinions (individuals change their past views little, with small $\mu$), the impact of propaganda is small. For example, for view $v_k = +1$ and with volatility $\mu = 0.5$, a person with opinion $s_i(t) = 0.8$ updates their view to $s_i(t+1) = 0.9$ if they accept $v_k$, whereas a person with view $s_j(t) = -0.8$ updates their view to $s_j(t+1) = 0.1$, meaning that a person with very different views from certain propaganda is more difficult to convince, but once convinced, the impact on their opinion is larger (Figure 3).
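The update rule of Eq. (6), combined with the cascade described above (seed 1% of the nodes, permanent trust-or-dismiss decisions, and sharing with all contacts upon trusting), could be sketched as follows; this is an illustrative reconstruction under the stated assumptions, not the original implementation.

```python
import networkx as nx
import numpy as np

rng = np.random.default_rng(7)

def propaganda_wave(G: nx.Graph, opinion: dict, v: int, theta: float, mu: float) -> dict:
    """One wave of propaganda v = +1 or -1 on network G; opinions are updated in place."""
    seeds = rng.choice(list(G.nodes), size=max(1, G.number_of_nodes() // 100), replace=False)
    queue, decided = list(seeds), set()
    while queue:
        i = queue.pop()
        if i in decided:
            continue
        decided.add(i)                                        # first exposure: a permanent choice
        if rng.random() < theta * (1 + v * opinion[i]) / 2:   # trusts the propaganda (Eqs. 4-5)
            opinion[i] = mu * v + (1 - mu) * opinion[i]       # updates their view (Eq. 6)
            queue.extend(G.neighbors(i))                      # shares it with all contacts
    return opinion
```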

Figure 3.

Probability of trusting either of the two types of propaganda, $v_k = \pm 1$, represented as the two triangles on the left, based on the individual opinions, represented as the colour of the nodes, and on how seductive the views are, $\theta$. Propaganda which supports the views of a person is more likely to be trusted by that person, but still, all propaganda has a certain level of persuasiveness, $\theta$ (the maximum height of the triangles). The impact of trusting some propaganda on individual opinions is higher if opinions are more volatile (higher values of $\mu$) and smaller if opinions are more rigid, which is shown as a slight colour change for rigid opinions and a drastic colour change for volatile opinions on the right.

A crucial element in opinion models is the way in which interactions between individuals are structured. Society has opinion clusters (for example, a social media group in which information flows easily) and opinion hubs (influencers, for example, who reach a large population), and it is likely to be strongly connected, with many shortcuts between people who are not directly connected; therefore, the network in which propaganda is shared is also a key element in the model. Four network topologies with N = 2,000 nodes are analysed here: (1) a fully connected network; (2) a proximity network (nodes are located randomly on a square and pairs at a distance smaller than a certain threshold are connected); (3) a small-world network and (4) a scale-free network.
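The four topologies could be generated with standard NetworkX generators, as in the sketch below; the radius, rewiring probability and attachment parameters are illustrative assumptions, chosen to give average degrees comparable to those reported in the conclusions (about 7.6 for the proximity network and 10 for the small-world and scale-free networks).

```python
import networkx as nx

N = 2000
networks = {
    "fully connected": nx.complete_graph(N),
    "proximity": nx.random_geometric_graph(N, radius=0.035),  # nodes on a unit square
    "small-world": nx.watts_strogatz_graph(N, k=10, p=0.1),
    "scale-free": nx.barabasi_albert_graph(N, m=5),
}
for name, G in networks.items():
    print(name, round(2 * G.number_of_edges() / N, 1))  # average degree
```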

The model has two parameters: the persuasiveness θ, which is assumed to be the same for all propaganda, and the volatility of opinions μ, which is assumed to be the same for all individuals. For some values of θ and μ and for some randomly assigned initial opinions, individuals are exposed to a total of 128 waves of propaganda (64 supporting each view).
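Putting the pieces together, a minimal version of the experiment (assuming the polarisation, opinion_homophily and propaganda_wave sketches above, the networks dictionary, and the random generator rng, all of which are illustrative names rather than the original code) would expose the population to 128 alternating waves and record the trajectory of (Φ, Λ) after each wave:

```python
import numpy as np

theta, mu = 0.85, 0.35                          # persuasiveness and opinion volatility
G = networks["proximity"]
opinion = {i: float(rng.uniform(-1, 1)) for i in G.nodes}

trajectory = []
for k in range(128):
    v = +1 if k % 2 == 0 else -1                # alternating propaganda: v_1 = +1, v_2 = -1, ...
    opinion = propaganda_wave(G, opinion, v, theta, mu)
    s = np.array(list(opinion.values()))
    trajectory.append((polarisation(s), opinion_homophily(G, opinion)))

print(trajectory[-1])                           # final polarisation and homophily
```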

3. Results

The trajectory of a society in terms of its polarisation $\Phi$ and its homophily $\Lambda$ after each round of propaganda shows that, for different network topologies, opinion dynamics yields different states. On a fully connected network, in which there is no relevant network structure, each round of propaganda reaches all individuals (if at least one person trusted it) and seduces some of them based only on their current opinion (Figure 4). After some propaganda rounds, most individuals have an opinion either close to $+1$ or to $-1$, so that polarisation is eventually maximal. Also, since all individuals interact with everyone else, homophily is reduced when polarisation increases. However, on other topologies, each wave of propaganda has a different impact, particularly after repetition. On a proximity network, most rounds of propaganda tend to increase the level of polarisation, but after repetition, most of the propaganda rounds also increase the level of homophily. Thus, after many rounds, the network has regions with similar (extreme) views and therefore, at a local level, nodes are mostly connected to others with similar views. On a small-world network and a scale-free network, most rounds of propaganda increase the level of polarisation, but the presence of network shortcuts and hubs reduces homophily considerably, so that most of the trajectories are less homophilic than their initial levels after 128 rounds of propaganda.

Figure 4.

Trajectories of social polarisation (horizontal axis) and homophily (vertical axis) simulated on four different network topologies. Each realisation, for some persuasiveness $\theta$ and opinion volatility $\mu$, is marked with a curve. All curves or realisations have a nearby starting point, which marks the polarisation and homophily of a random distribution of opinions. For each topology, the four quadrants with a higher (or lower) polarisation and a higher (or lower) homophily are coloured, and the three trajectories with the highest and lowest polarisation and the highest homophily are marked with thick curves.

For some of the trajectories, it is observed that the first few rounds of propaganda increase the polarisation and decrease the homophily. After many rounds of propaganda, the level of homophily might increase, indicating the formation of clusters of nodes with similar opinions, particularly on a proximity network. In some cases, polarisation might decrease, but only after homophily has decreased (and not the other way around), meaning that the observed changes in opinion dynamics happen first at a local level and only then might they be perceived at a global scale. Notice, however, that very few trajectories reach less polarisation than their starting point. Thus, propaganda or similar external forces tend to increase polarisation and will frequently produce a higher level of polarisation than the one observed with a random distribution of opinions.

3.1 Parameter space

The observed levels of polarisation and homophily depend on the persuasiveness of the propaganda $\theta$ and the opinion volatility $\mu$. On a proximity network, for instance, with highly persuasive propaganda ($\theta \approx 1$) and volatile opinions ($\mu \approx 1$), a highly polarised society with highly homophilic interactions emerges after only a few rounds of propaganda. However, if propaganda is not as seductive or if individuals do not update their views easily, it takes several rounds of propaganda to observe a polarised society (Figure 5).

Figure 5.

Observed levels of polarisation (left) and homophily (middle) on a proximity network according to some values of the persuasiveness of propaganda θ (horizontal axis) and the volatility of opinions μ (vertical axis) after 8, 32 and 128 rounds of propaganda. Higher levels of polarisation and homophily are darker, representing extreme views and a homophilic society respectively, and lower levels of polarisation and homophily are lighter, representing consensus and frequent exchanges between people with different views. For the same values of θ=0.85 and μ=0.35, with the same initial (random) opinions, 250 realisations of the dynamics follow different trajectories (right), where the levels of polarisation and homophily after 8, 32 and 128 rounds of propaganda are highlighted.

For some values of $\theta$ and $\mu$, there is extreme sensitivity to the parameters. On a proximity network, with higher values of the persuasiveness of propaganda $\theta$, very small changes in the two parameters can leave society either highly polarised or close to a consensus after many rounds of propaganda. Furthermore, even with the same initial opinions and with the same values of $\theta$ and $\mu$, society might reach very different levels of polarisation and homophily (right part of Figure 5). Individuals exposed to propaganda are randomly picked and, according to their opinion, they might be seduced by it and share it with their contacts, or ignore it, thus altering the outcome after that round of propaganda. With only a few waves of propaganda, the outcomes might be similar, but those small changes are cumulative and so after many rounds, the outcome might be a society close to extremism or even close to consensus, even if the starting point is the same.

The first rounds of propaganda decrease the homophily of society, so that people with some extreme view have frequent interactions with others with different views. As the number of propaganda rounds grows, opinion clusters are formed, and interactions become more and more frequent between individuals with similar views. Thus, even if at a global scale the level of polarisation is increasing, after many rounds of propaganda, people might be less aware of the existence and abundance of different views. Extreme opinions might become more frequent because of propaganda. A similar -although less pronounced- polarising and homophilic society is frequently observed on a scale-free and a small-world network, although the presence of hubs and shortcuts in the network reduces the creation of opinion clusters (Figure 6).

Figure 6.

Observed levels of polarisation (top) and homophily (bottom) according to some values of the persuasiveness of propaganda θ (horizontal axis) and the volatility of opinions μ (vertical axis) after 128 rounds of propaganda. Four network topologies are considered, a fully-connected, a proximity, a small-world and a scale-free network from left to right.

The fully-connected network helps to observe the dynamics of opinions without any relevant network structure. With some level of persuasiveness $\theta$ and opinion volatility $\mu$, society eventually reaches polarisation. With more rounds of propaganda, polarisation increases up to extremism, and only with no persuasiveness ($\theta = 0$) or no volatility ($\mu = 0$) does society remain without extremism. However, for different network topologies, propaganda might have a different impact. Particularly in the case of a proximity network (with high values of $\theta$) and of a scale-free network (with medium values of $\theta$), propaganda might increase homophily and, in some cases, reduce polarisation.

4. Conclusions

Social models are a simplification of very complex processes which happen at an individual level but might be able to capture some collective emergent aspects. In terms of opinion dynamics, modelling individual views as a number, simplifying external forces such as propaganda, and simulating interactions and a process of opinion updating lets us detect emergent patterns, including an increase in the global levels of polarisation and in the frequency of homophilic interactions between individuals.

The network structure plays a significant role, as the emergence of homophilic clusters which reinforce their opinions is detected, particularly on a network where there is a large distance between nodes, such as a proximity network.

The observed results, in terms of the trajectories and the levels of polarisation and homophily after many rounds of propaganda, show that there might be a high sensitivity with respect to the parameters. Two simulations under the same network structure and even the same initial opinions and parameters might follow different trajectories and end with substantially distinct levels of homophily and polarisation. The model initially exposes 1% of the population to some propaganda and, depending on who is exposed, the dynamics change and eventually reach very different states. For some regions of the parameter space, there is unpredictability in the state in which society will end up after propaganda.

In the simulated networks, the average degree is 7.6 for the proximity network and 10 for the small-world and the scale-free networks. The intensity of interactions, measured as the degree of the nodes, accelerates or hinders the diffusion of propaganda, and thus accelerates or hinders polarisation and homophily as well. A less-connected society is more prone to the creation of homophilic clusters.

4.1 What is different between a highly polarised society and one with little polarisation

In a highly polarised society, individuals become “immune” to propaganda which does not support their views and dismiss it easily, whereas propaganda which supports their views confirms their beliefs and takes them to even more extreme and polarised views. In a polarised society, even with low levels of homophily (meaning that individuals are likely to be exposed to both types of propaganda), individuals eventually become so biased in favour of one of the extreme views that their opinion becomes too difficult to change.

In a society with low levels of polarisation, views could reach a consensus on one of the two extremes, in which case propaganda in favour of either opinion has little impact. This case happens when one of the two views becomes dominant at early stages, in which case individuals also become “immune” to propaganda (and since the first propaganda they are exposed to is $+1$, that view is slightly more likely to become dominant in the long run).

However, the most frequently observed consensus is one in which barely anyone has extreme views: propaganda in favour of either view flows between most individuals, who update their opinion accordingly, but never enough to reject further waves of propaganda, so they keep updating their opinion.

Acknowledgments

This chapter was completed with support from the PEAK Urban programme, funded by UKRI’s Global Challenge Research Fund, Grant Ref: ES/P011055/1.

Conflict of interest

The author declares no conflict of interest.

References

  1. Jeffrey H Johnson. The future of the social sciences and humanities in the science of complex systems. Innovation–The European Journal of Social Science Research, 23(2):115–134, 2010
  2. Julian CR Hunt, Yulia Timoshkina, Peter J Baudains, and Steven R Bishop. System dynamics applied to operations and policy decisions. European Review, 20(3):324–342, 2012
  3. Jeffrey H Johnson. The “can you trust it?” problem of simulation science in the design of socio-technical systems. Complexity, 6(2):34–40, 2000
  4. Eric Bonabeau. Agent-based modeling: Methods and techniques for simulating human systems. Proceedings of the National Academy of Sciences, 99(suppl 3):7280–7287, 2002
  5. Xiaoshan Pan, Charles S Han, Ken Dauber, and Kincho H Law. A multi-agent based framework for the simulation of human and social behaviors during emergency evacuations. AI & Society, 22(2):113–132, 2007
  6. Shane D Johnson and Elizabeth R Groff. Strengthening theoretical testing in criminology using agent-based modeling. Journal of Research in Crime and Delinquency, 51(4):509–525, 2014
  7. Serge Galam, Yuval Gefen, and Yonathan Shapir. Sociophysics: A new approach of sociological collective behaviour. I. Mean-behaviour description of a strike. Journal of Mathematical Sociology, 9(1):1–13, 1982
  8. Martin Short, Jeffrey Brantingham, Andrea Bertozzi, and George Tita. Dissipation and displacement of hotspots in reaction-diffusion models of crime. Proceedings of the National Academy of Sciences, 107(1):3961–3965, 2010
  9. Duncan J Watts and Steven H Strogatz. Collective dynamics of ‘small-world’ networks. Nature, 393(6684):440–442, 1998
  10. Dirk Helbing, Dirk Brockmann, Thomas Chadefaux, Karsten Donnay, Ulf Blanke, Olivia Woolley-Meza, Mehdi Moussaid, Anders Johansson, Jens Krause, Sebastian Schutte, and Matjaž Perc. Saving human lives: What complexity science and information systems can contribute. Journal of Statistical Physics, 158(3):735–781, 2015
  11. Gérard Weisbuch, Guillaume Deffuant, Frédéric Amblard, and Jean-Pierre Nadal. Meet, discuss, and segregate! Complexity, 7(3):55–63, 2002
  12. Rafael Prieto Curiel and Humberto González Ramírez. Vaccination strategies against COVID-19 and the diffusion of anti-vaccination views. Scientific Reports, 11(1), 2021
  13. Bill Bishop. The big sort: Why the clustering of like-minded America is tearing us apart. Houghton Mifflin Harcourt, 2009
  14. Michela Del Vicario, Alessandro Bessi, Fabiana Zollo, Fabio Petroni, Antonio Scala, Guido Caldarelli, H Eugene Stanley, and Walter Quattrociocchi. The spreading of misinformation online. Proceedings of the National Academy of Sciences, 113(3):554–559, 2016
  15. Vítor V. Vasconcelos, Simon A Levin, and Flávio L Pinheiro. Consensus and polarization in competing complex contagion processes. Journal of the Royal Society Interface, 16(155):20190196, 2019
  16. Kazutoshi Sasahara, Wen Chen, Hao Peng, Giovanni Luca Ciampaglia, Alessandro Flammini, and Filippo Menczer. On the inevitability of online echo chambers. arXiv preprint arXiv:1905.03919, 2019
  17. Mark EJ Newman. The structure and function of complex networks. SIAM Review, 45(2):167–256, 2003
  18. Miller McPherson, Lynn Smith-Lovin, and James M Cook. Birds of a feather: Homophily in social networks. Annual Review of Sociology, 27(1):415–444, 2001
  19. Seth Flaxman, Sharad Goel, and Justin M Rao. Filter bubbles, echo chambers, and online news consumption. Public Opinion Quarterly, 80(S1):298–320, 2016
  20. Gilat Levy and Ronny Razin. Social media and political polarisation. LSE Public Policy Review, 1(1), 2020
  21. G Pennycook, T D Cannon, and D G Rand. Prior exposure increases perceived accuracy of fake news. Journal of Experimental Psychology: General, 147(12):1865, 2018
  22. Dustin P Calvillo, Bryan J Ross, Ryan JB Garcia, Thomas J Smelter, and Abraham M Rutchick. Political ideology predicts perceptions of the threat of COVID-19 (and susceptibility to fake news about it). Social Psychological and Personality Science, page 1948550620940539, 2020
  23. Noah E Friedkin and Eugene C Johnsen. Social influence and opinions. Journal of Mathematical Sociology, 15(3–4):193–206, 1990
  24. J Scott Brennen, Felix Simon, Philip N Howard, and Rasmus Kleis Nielsen. Types, sources, and claims of COVID-19 misinformation. Reuters Institute, 7:3–1, 2020
  25. H J Larson. A lack of information can become misinformation. Nature, 306-306, 2020
  26. Oberiri Destiny Apuke and Bahiyah Omar. Fake news and COVID-19: modelling the predictors of fake news sharing among social media users. Telematics and Informatics, page 101475, 2020
  27. Bertram Düring, Peter Markowich, Jan-Frederik Pietschmann, and Marie-Therese Wolfram. Boltzmann and Fokker–Planck equations modelling opinion formation in the presence of strong leaders. Proceedings of the Royal Society of London A: Mathematical, Physical and Engineering Sciences, 465(2112):3687–3708, 2009
  28. Bertram Düring and Marie-Therese Wolfram. Opinion dynamics: inhomogeneous Boltzmann-type equations modelling opinion leadership and political segregation. Proceedings of the Royal Society of London A: Mathematical, Physical and Engineering Sciences, 471(2182), 2015
  29. Rafael Prieto Curiel and Steven Richard Bishop. Modelling the fear of crime. Proceedings of the Royal Society of London A: Mathematical, Physical and Engineering Sciences, 473(2203), 2017
  30. Claudio Castellano, Santo Fortunato, and Vittorio Loreto. Statistical physics of social dynamics. Reviews of Modern Physics, 81(2):591, 2009
  31. Guillaume Deffuant, David Neau, Frederic Amblard, and Gérard Weisbuch. Mixing beliefs among interacting agents. Advances in Complex Systems, 3(4):87–98, 2000
  32. Luís M.A. Bettencourt, Ariel Cintrón-Arias, David I. Kaiser, and Carlos Castillo-Chávez. The power of a good idea: quantitative modeling of the spread of ideas from epidemiological models. Physica A: Statistical Mechanics and its Applications, 364:513–536, 2006
  33. Deepak Bhat and S Redner. Polarization and consensus by opposing external sources. Journal of Statistical Mechanics: Theory and Experiment, 2020(1):013402, 2020
  34. Janusz Holyst, Krzysztof Kacperski, and Frank Schweitzer. Social impact models of opinion dynamics. Annual Reviews of Computational Physics, 9:253–273, 2002
  35. Krzysztof Kacperski and Janusz Holyst. Opinion formation model with strong leader and external impact: a mean field approach. Physica A: Statistical Mechanics and its Applications, 269(2):511–526, 1999
