Open access peer-reviewed chapter

The Evolving Interplay between Social Media and International Health Security: A Point of View

Written By

Keith Conti, Shania Desai, Stanislaw P. Stawicki and Thomas J. Papadimos

Submitted: 13 March 2020 Reviewed: 17 June 2020 Published: 24 July 2020

DOI: 10.5772/intechopen.93215

From the Edited Volume

Contemporary Developments and Perspectives in International Health Security - Volume 1

Edited by Stanislaw P. Stawicki, Michael S. Firstenberg, Sagar C. Galwankar, Ricardo Izurieta and Thomas Papadimos



Human communication and interaction have been rapidly evolving with the advent and continuing influence of social media (SM), thereby accelerating information exchange and increasing global connectivity. Despite clear advantages, this new technology can present unintended consequences, including medical misinformation and “fake news.” Although International Health Security (IHS) stands to benefit tremendously from various SM platforms, high-level decision-makers and other stakeholders must also be aware of the dangers related to their intentional and unintentional misuse (and abuse). An overview of SM utility in fighting disease, disseminating life-saving information, and organizing people and teams in a constructive fashion is discussed herein. The potential negatives associated with SM misuse, including intentional and unintentional misinformation, as well as the ability to organize people in a disruptive fashion, will also be presented. Our treatise will additionally outline how deliberate misinformation may lead to harmful behaviors, public health panics, and orchestrated patterns of distrust. In terms of both its constructive and destructive potential, SM can be viewed as an asymmetric influencing force, with observed effects (whether beneficial or harmful) being disproportionately greater than the cost of the intervention.


Keywords

  • global health security
  • International Health Security
  • social media
  • misinformation
  • fake news

1. Introduction

International Health Security (IHS) encompasses a broad range of intertwined subject areas related to human security [1, 2, 3, 4]. Introduced by the United Nations in the mid-1990s, the definition of “health security” remains nebulous because of the overlap between its constituent “health” and “security” components [5]. Thus, there continues to be debate about the degree of such overlap and its implications. In addition, although traditional IHS applications focus on bio-terrorism and emerging infectious diseases (EIDs) [6, 7], the concepts of “health” and “security” can be applied more broadly when different man-made and non-man-made factors, from climate change to cyber health security, are considered [8, 9, 10, 11].

The domination of personal and professional human interactions by increasingly powerful and sophisticated social media (SM) platforms has brought with it many benefits and challenges. For example, SM has created conditions for cyberbullying, voter/public opinion manipulation, and criminal activity despite its well-intended attempts to bring people together in a digital, boundaryless environment [12]. Here, we will discuss how SM can create both constructive and destructive forces, focusing specifically on IHS and related topic areas.


2. The definition of social media

SM’s definition and sphere of influence extend to “any medium involving user-generated content” [13, 14]. Assorted subdomains within SM include blogs or microblogs, interactive forums, message boards, social networks, wikis, as well as other types of audiovisual media-based platforms (e.g., photo or video sharing sites) [13, 14, 15]. Widespread acceptance of SM has contributed to a startling rise in the overall amount of information being shared, the acceleration and pervasiveness of such sharing, as well as the ability to interact across nearly all areas of human activity, inclusive of public health and medical care [12, 16]. It must be noted that the vast volume of information shared on SM is largely unfiltered and difficult to verify.

The reach and breadth of SM platforms and related communication tools have grown exponentially and matured as the Internet has expanded [14, 17, 18]. During this time, the purpose and focus of various SM tools have remained poorly defined [19, 20]. While altruism is at the heart of most large SM platforms, it is difficult to promote (or enforce) charitable and humanitarian application by end-users, especially in the context of the best interest of communities [21, 22, 23]. As with all discourses involving the interchange of knowledge, transparency becomes a paramount concern, in that any information disclosures should be made in a manner that is both open and honest, in effect strengthening the legitimacy of the involved SM platform [24, 25, 26]. It is critical that SM adheres to accepted ethical and scientific norms and that this adherence applies to the full range of related domains, including bioinformatics, statistical testing, peer-review, and independent validation [27, 28, 29, 30, 31, 32, 33, 34]. It should be emphasized, particularly as it pertains to SM in the context of the subsequent IHS arguments made in this chapter, that the most popular, seemingly persuasive, and commonly repeated messaging does not always constitute the absolute truth or reliable information, and that one should be free to question and challenge any data one is presented with [35, 36, 37, 38, 39, 40]. This is especially applicable in the setting of the question-and-answer format of information exchange, where both relevance and quality of information are critical [40].


3. Social media platforms: “weaponizing” human emotions and interactions

The very presence of today’s Internet has created the expectation of internationalization and universal information sharing amid the rapidly evolving frameworks of technological and social change [41, 42, 43]. As the Internet matured, the type and presentation of the information itself evolved, with increasing participation of highly diversified, user-furnished content [44]. Human beings have always valued the stimuli provided by their senses, either consciously or subconsciously. People tend to create various reference points, both to self and others, allowing the construction of an environment that psychologically conditions its participants. This may lead to a compulsive feedback loop that emerges from intense competitive pressures, with users trying to out-compete themselves and others in search of external affirmation [45, 46, 47, 48, 49]. As part of its continued development, the Internet became rich in various audio-visual representations specifically designed to show appreciation or depreciation of the stimuli [49, 50, 51]. From a historical perspective, the first “like” button has been attributed to Vimeo, a video-sharing platform, in 2005 [52, 53]. Vimeo’s designers were inspired by Digg, a website that encouraged users to click a button labeled “digg” when they viewed a rewarding picture or read an interesting article. Similar concepts have been fully embraced across the SM sphere and serve as the “fuel” in the highly gamified “ratings” competition [53, 54, 55, 56].

The concept of “liking,” or providing “virtual endorsement” to an information snippet on an SM platform, has sparked intense research into the implications and consequences of an action that, at first blush, appears benign [57, 58, 59]. More specifically, there is evidence that the action of providing a “like” creates a basis for a “directed voting” or reward system of sorts [59, 60]. Early research by Davey et al. [61] investigated the effect of “being liked” on specific regions of the brain. Primary reward and self-related regions were activated under such conditions (e.g., nucleus accumbens, ventral tegmentum/midbrain, ventromedial prefrontal cortex, and amygdala, among others) [61]. Furthermore, there was proportionately greater activation of these neurological regions in response to “being liked” by individuals who are more highly regarded [61]. Sherman et al. [62] used a computer program resembling Instagram (Facebook, Inc., Menlo Park, California, USA) to investigate regions of the brain that exhibited significant activity when an image was “liked.” Their results showed statistically significant activation in the dorsal and ventral striatum, ventromedial prefrontal cortex, midbrain, and amygdala when “liking” an image versus simply pressing “next” to view the subsequent image [62]. These regions have been implicated in reward and reward processing [63, 64]. Sherman et al. [62] did note that the esthetic quality of the image itself can lead to activation of similar pathways. If both the “act of liking” and the act of “receiving a like” are interpreted as rewards, it becomes clear why SM has been expanding so quickly. However, this also raises the question: what are the consequences of such rewards?

Studies have linked these changes to the dopaminergic system and its corresponding communication with the striatum [65]. When looking at the effects of the “like” from the neuroscience perspective, functional MRI studies of SM users suggest that the “popularity of a photo” has a significant effect on its viewer’s perception. A popular photo was more likely to receive additional “likes” from SM peers regardless of the activity being portrayed [66]. Sherman et al. [65] went on to discuss the quantification of social endorsement as an important example of sociocultural learning. Moreover, the question arises as to whether individuals actively engaged in SM-based discourses are more likely to neglect direct human interactions in favor of reaching larger, “virtual and impersonal” audiences [46, 67, 68]. If so, what are the implications of such conditioning for the ability to critically evaluate information encountered on SM platforms? Likewise, how does one sort out what is real versus what is virtual, as well as the impact of information upon each of these domains?

Before venturing back to the primary discussion of the relationship between SM and IHS, the concepts of “vague-booking” and “mediated lurking” should be noted [66, 69]. Berryman et al. [66] define “vague-booking” as sharing “ambiguous but alarming posts” to attract attention. “Mediated lurking,” on the other hand, is defined as becoming a member/user of an SM platform but “wishing to go largely undetected” [69, 70]. Finally, one would be remiss without mentioning the potential for “cyberbullying” and “cyber aggression” [71, 72]. These phenomena can create a real and damaging link between online and real-life behaviors [71, 73, 74]. Aggressive online behaviors have been categorized into subtypes, which include hostile aggression (e.g., “an act of aggression stemming from a feeling of anger and aimed at inflicting pain or injury”) and instrumental aggression (e.g., “an intention to hurt the other person, but the hurting takes place as a means to some goal other than causing pain”) [75]. Within this general context, SM facilitates and escalates negative behaviors that may serve to “deliver aggression” toward a person of interest without the aggressor being physically present. Jamison et al. [76] investigated the types of malicious actors found on Twitter (Twitter, San Francisco, California, USA) and the potential ramifications of their influence. Three categories of malicious actors were identified: (1) automated accounts (i.e., bots without human influence once created), (2) semiautomated accounts (i.e., bots with some degree of human influence), and (3) malicious humans [76]. Malicious SM actors have been labeled “trolls,” with the associated activity of “trolling” defined as “posting denigrating and inflammatory messages in order to argue and/or emotionally upset individuals” [77]. Each form allows for exerting influence, as well as delivering aggression. In particular, if the actors can deliver an aggressive message with disregard for any ramifications, they are in fact executing the purest form of passive aggression: a demonstration of hostile feelings in a non-confrontational manner [78, 79]. All of the above definitions, characteristics, behavioral patterns, and consequences play an important role in the “weaponization” of SM as it relates to IHS. Within this psychological context, a discussion of various aspects of concern regarding targeted SM content manipulation is warranted [80].


4. Social media in public health

As outlined in previous sections, SM has several emerging real-life applications in public health [81]. It is a powerful platform for real-time data collection, especially during fast-moving events such as epidemics or outbreaks [81, 82]. User inputs on SM platforms may help with the detection—and subsequent mapping—of geographic patterns for disease-specific signs or symptoms, confirmed cases, and/or other relevant parameters [83]. The resultant data can then be filtered, tracked, collected, analyzed/modeled, and reported [84, 85].
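The detection-and-mapping workflow described above can be illustrated with a minimal sketch. The symptom keywords, example posts, and region tags below are entirely hypothetical; a real surveillance pipeline would rely on platform APIs, natural language processing, and proper geocoding rather than simple keyword matching.

```python
import re
from collections import Counter

# Hypothetical symptom keywords to track; a real system would use a curated,
# disease-specific vocabulary and language-aware text processing.
SYMPTOM_PATTERN = re.compile(r"\b(fever|cough|chills|sore throat)\b", re.IGNORECASE)

# Invented example posts standing in for geotagged social media inputs.
posts = [
    {"text": "Day 3 of fever and a bad cough", "region": "Region A"},
    {"text": "Beautiful weather today!", "region": "Region B"},
    {"text": "Everyone at work has a sore throat", "region": "Region A"},
]

def symptom_counts_by_region(posts):
    """Filter posts mentioning tracked symptoms, then tally them per region."""
    counts = Counter()
    for post in posts:
        if SYMPTOM_PATTERN.search(post["text"]):
            counts[post["region"]] += 1
    return counts

print(symptom_counts_by_region(posts))  # Counter({'Region A': 2})
```

Aggregated per-region counts of this kind are the raw material that would subsequently be filtered, modeled, and reported, as described above.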

The use of SM to analyze various aspects of disease outbreaks (e.g., prediction, detection, and tracking) was described in the early 2010s by several independent groups [86, 87, 88, 89]. SM was felt to be instrumental in containing the Ebola outbreak in Nigeria through enhanced information sharing and coordination between front-line personnel [85]. While Internet “search engines” are valued tools primarily because they can be leveraged for targeted marketing and sales, their use in characterizing the epidemiology and geographic evolution of an emerging disease [90, 91, 92], as well as in other more scientifically focused endeavors [93, 94], is of unquestionable worth. Increased frequency of specifically tracked search queries, such as “how does one prevent the flu,” “what is the treatment of the flu,” and “what are the most common symptoms of the flu,” has shown accuracy and temporal correlation with the extent of disease spread and its prevalence, especially when contrasted against more traditional means of tracking outbreak progression. There is also a strong correlation between trends identified by “Internet search engines” and phenomena such as emergency department visits by patients with influenza [95]. A striking example of the correlation between Google Trends and an emerging infectious threat was recently demonstrated in an “infodemiological” study of the Wuhan coronavirus (2019-nCoV) [96]. Still, it is not surprising that some pragmatic researchers urge caution when using such information in the absence of complete epidemiological understanding, context, and expert interpretation [97, 98].
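The temporal correlation described above can, in principle, be quantified with a plain Pearson coefficient between a query-volume series and a reported-case series. The two weekly series below are invented for illustration only; they are not actual Google Trends or surveillance data.

```python
import math

# Invented weekly series: volume of a flu-related search query versus
# officially reported case counts over the same seven weeks.
query_volume = [120, 180, 260, 400, 520, 480, 300]
reported_cases = [10, 25, 40, 70, 95, 90, 55]

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = math.sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = math.sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)

print(f"correlation: {pearson(query_volume, reported_cases):.3f}")
```

A coefficient near 1 would indicate that query volume closely tracks reported cases, which is the kind of signal the infodemiological studies cited above exploit; as those same studies caution, correlation alone does not establish epidemiological validity.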

Despite some flaws, strict adherence to proper scientific methodology and structured peer review can provide reasonably robust ways of enforcing proper balance to help minimize the risk of propagation of false or misleading information [99, 100, 101]. To ensure wide adoption, SM platforms tend to be open and inviting, thus providing an essentially unrestricted forum for the exchange of ideas. Much of this occurs in the name of “protecting and enabling free speech.” Consequently, short of legal action, objective accountability for communicated content is lacking at best [102, 103, 104, 105]. In certain scenarios, unrestricted online attacks can be very destructive, with consequences in both the “real” and “digital” domains, and both personally and professionally [12, 106]. Some forms of malicious SM participation were discussed in the preceding section. Among established SM platforms, there seems to be a struggle to find a balance between self- or user/community-censorship and various forms of “online aggression” [12, 72, 107]. Significant spillover into public health can occur, especially among minors, and can have tremendous impact when “online actions” translate into “real-world implications” [108, 109, 110, 111].


5. International Health Security: constructive uses of social media

It has been shown that SM-based vigilance can be useful for outbreak or epidemic interception, tracking, and data reporting [86, 87, 112, 113]. In the midst of the 2014 Ebola outbreak in West Africa, isolated islands of the disease were successfully contained by leveraging SM-based coordination tools, including targeted identification of misinformation and its prompt correction [85]. SM can also be valuable to the public health community when determining how human networks behave in the context of social determinants of health (e.g., health behaviors, resource availability, and general compliance) [114]. Thus, SM may be particularly helpful in promoting positive health behaviors [115]. All of the above implementations of SM in public health are now being actively employed during the Wuhan 2019-nCoV outbreak, with a focus on augmented intelligence in the context of preventing the spread of the disease [116, 117]. An additional use case for SM, as reported in conjunction with 2019-nCoV, is the promotion of psychological crisis interventions using popular SM channels to share strategies for dealing with stress and anxiety associated with the outbreak [118].


6. International Health Security: potentially harmful aspects of social media use

The sharing of non-peer-reviewed information over SM carries the risk of transmitting misinformation or misinterpretation of such unfiltered content [119], especially when taken out of context. The hourly volume of SM messaging that may contain “inaccurate or fake news” exceeds “fact checking” capabilities by as much as 10-fold [120]. In addition, the average time between the release of “fake news” and any “fact checking” response may be greater than 12 hours, allowing significant damage to occur before the misinformation can be rectified [120]. Thus, misinformation introduced into public discourse can be substantial if placed by a highly motivated and appropriately equipped individual (or group). In the context of IHS, the consequences can be profound when “fake news” is carefully crafted and communicated in a strategic manner (Table 1). “Fake news” can be damaging in several ways, from spreading false claims (e.g., that the risk of vaccines is greater than their benefits), to misinforming the public regarding a particular health condition (e.g., misstating signs and symptoms of a viral infection). Programmatic moderation of content is one of the solutions that SM outlets have embraced, but this process is very resource-intensive, may be quite cumbersome, and may not apply universally across different types of data [121, 122]. Additionally, in regard to SM and addiction, there are significant correlations between symptoms of technology addiction and mental health conditions [123]. Furthermore, more advanced technological applications, such as virtual reality, have been associated with dissociation and a lower “sense of presence” in objective reality [124]. Various psychological aspects and nuances associated with SM use were discussed earlier in the chapter, and the reader is referred there to avoid content redundancy.

Harmful behavior | Description | Comments
Cyberbullying | SM content that is intimidating or threatening in character, with potential for risk to self or others. Associated harm may be both mental and physical | Robust surveillance, reporting, and prompt remediation; establishing and enforcing accountability, as applicable
Fabricated or “fake” news | Intentional release of erroneous news and information via both traditional media and SM. Consequences can be both unintended and unpredictable, including intentionally or unintentionally harmful or damaging behaviors, or misdirected action | Adherence to established news reporting standards; sound editorial policies and procedures; appropriate fact-checking and prompt intervention to avoid any resultant or potential harm
Intentional misinformation | Leading individuals to perform actions that may have harmful consequences for self or others. Release of intentional misinformation may result in random and unpredictable downstream events. The process involves the end-user receiving, processing, and implementing the information before actual harm can result | Empowering SM moderators to remove harmful content; vigilance, fact-checking, and timely intervention to prevent any potential or actual damage from dissemination of false information; legal consequences for the intentional introduction of potentially damaging misinformation
Misinterpretation | Erroneous conclusions drawn from data generated or compiled from SM inputs. Although usually not intentional, this may lead to misguided planning or implementations, with some potential for harm | Careful cross-checking and verification of both the source data and the analytical methods; use of established decision-making algorithms and verification mechanisms

Table 1.

Primary modes of deleterious behavioral patterns described on social media, with associated characteristics, potential for negative consequences, and proposed remedial/corrective measures.

SM, social media.
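The fact-checking capacity gap cited above (messaging volume exceeding fact-checking capability by as much as 10-fold, with a response lag of over 12 hours) implies a steadily growing backlog of unreviewed content. The hourly figures in this sketch are invented placeholders that merely preserve the 10:1 ratio.

```python
# Invented throughput figures preserving the ~10:1 ratio cited in the text.
FLAGGED_PER_HOUR = 1000   # hypothetical inflow of posts needing review
CHECKED_PER_HOUR = 100    # hypothetical fact-checking throughput

def backlog_after(hours, inflow=FLAGGED_PER_HOUR, throughput=CHECKED_PER_HOUR):
    """Unreviewed posts accumulated after a given number of hours."""
    return max(0, (inflow - throughput) * hours)

# By the time a >12-hour fact-checking response arrives, the backlog is large.
print(backlog_after(12))  # 10800 unreviewed posts under these assumptions
```

The linear growth shown here is the simplest possible model; it nonetheless conveys why a response that lags by half a day allows misinformation to do much of its damage before any correction appears.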

Although SM can provide an excellent medium for open discourse on a broad range of topics, the troubling reality is that SM may foster a society with fewer defined boundaries (e.g., “real-life friend” versus “social media friend”) [125, 126], thereby no longer plainly demarcating a defined personal space and presenting a risk of intentional or unintended “invasion” [127, 128]. Although SM’s negative aspects have been addressed, there is little doubt that responsible use of SM facilitates public awareness of various health/mental health matters and thus can provide an overall positive influence.

There are examples of SM serving as a successful tool for tracking actual disease outbreaks/epidemics [86]. However, the risk of “false alarms” does exist, potentially affecting the utility of SM as a public health tracking tool at the population level [129]. The tracked data often lack specificity [130], and can be misinterpreted and/or distorted, then promoted by influential personas without appropriate training or content expertise [131, 132]. Reversing damage caused by distortion of the facts and misinterpretation can be challenging [133, 134, 135, 136, 137]. For example, the controversy regarding the alleged association between childhood vaccination and autism exemplifies how concerns of a global nature can be distorted in a highly publicized fashion [138, 139, 140]. Despite multiple research studies that were unable to demonstrate a connection between childhood vaccines and autism, large groups within society still advocate otherwise [141, 142, 143]. Unfortunately, misplaced trust tends to be given to the messengers and various SM tools, rather than to the authorities and the medical community [144, 145, 146]. These considerations need to be viewed from the perspective of IHS, especially when one realizes that the “most prominent voice” is not universally the one with the correct or the best answer [147, 148].

There is a clear and present danger of malignant actors abusing SM to spread disinformation that may potentially lead to third-party harm [149, 150]. In 2009, in the midst of the swine influenza season, there was a substantial uptick in SM reporting of various conspiracies about the flu virus, its alleged genetically engineered origins, and other unfounded rumors [151, 152, 153]. Thousands of user views of the questionable material were recorded by the involved SM platforms, likely a significant underestimate [151, 152, 153], with literally thousands of additional search results on the “swine flu epidemic” topic [154]. Similar sources of misinformation continue to be abundant despite their unfounded assumptions and obvious danger [155, 156]. For example, SM outlets are rich with unfounded speculation regarding the most recent coronavirus outbreak in Wuhan, China [157, 158]. Vijaykumar et al. [159] describe the so-called “social media virality risk,” which attempts to quantify the amplification of the population’s perception of public health risk in the overall context of “social media effectiveness.” Their conceptual model is called Risk Amplification through Media Spread (RAMS) [159].

Of critical importance, SM platforms may be preferred by individuals (or groups) who perceive a lack of other means to express their beliefs and thoughts. As such, SM can be thought of as a “virtual aggregator” for people who share generally compatible and/or synergistic viewpoints [160, 161, 162]. Moreover, there is a non-trivial risk of such “virtual groups” evolving beyond an “online presence” [163]. Traditionally, unconventional or controversial beliefs often tended to be marginalized by society, with a relative lack of effective platforms to share thoughts and ideas [164, 165, 166]. In current times, essentially every idea can have an “online home,” with SM actively facilitating the aggregation of like-minded people into groups [167, 168]. And although “virtual homes” can become hubs for creativity, innovative thinking, scientific discovery, understanding, diversity, and idea exploration, they can also be sources of damaging misinformation. Finally, there is at least some evidence that SM may also create an avenue for people to explore different points of view, thus potentially providing an avenue for “new perspectives and open mindedness” [159, 169]. Table 2 shows an overview of characteristics likely to be correlated with misinformation, with a focus on SM-related aspects.

Characteristic | Comments
Anonymous authorship | Although some press/news releases may not give credit to specific author(s) or source(s), stories/information from anonymous sources (especially if impossible to independently verify) should prompt additional confirmation; this is especially true when making policies or implementing procedures based on such information
Attractive or “catchy” headline | A “catchy” headline tends to attract a larger number of viewers and may help enhance subsequent dissemination of (mis)information
Cult-like followership | Concepts that attract cult-like (often fanatical) followership tend to be more prone to aggregate “SM communities” around them. This, in turn, may assist in further propagation of biased/false information
Disclaimer provided alongside the content | In most cases, sources that feature a disclaimer should be considered with caution, mainly because disclaimers tend to be used in the setting of potential liability risk
Dramatic or otherwise emotional nature of the content | If the content contains dramatic or emotional language, and/or leads to a strong emotional response, it was most likely intended to do so. Oftentimes, hidden agenda(s) may be present
Embedded forward-looking claims or predictive statements | News content that provides a specific claim of a future event is most likely unauthentic. Likewise, when faced with reports of an effective therapy, end-users should carefully seek verification and remain skeptical
Information superficially “appears” to be legitimate | When “crafted and disseminated” in a specific and deliberate fashion, wrong information may appear legitimate. Only after a more careful/detailed review may factual or logical inconsistencies be found
Reputable source/origin of the information is claimed by authors | FN may gain more credibility if the source “appears” legitimate. Having said that, any significant claims from an apparently respected source must be substantiated and verified, especially if the story’s author is not clearly identifiable (see above under “anonymous authorship”)
Propagation by high-profile individuals | A superficial appearance of credibility can often be maintained around fictional story accounts, especially when reputable individuals (e.g., community leaders, politicians, scientists) participate in information dissemination
The story is too good to be true… | In cases where SM information appears to be “too good to be true,” end-users should remain critical and question any such reports/stories
Unusual or atypical domain name/uniform resource locator (URL) | When content is located on a Website, or originates from a source with an unusual domain name, suspicious user identification, and/or URL, skepticism is always wise

Table 2.

Factors that may signal that one is exposed to attempts at the dissemination of “fake news” and misinformation on social media platforms.

FN, fake news; SM, social media; URL, uniform resource locator.

Equipped with SM tools, “malignant actors” can develop a substantial potential for harm and otherwise destructive consequences. Thus, the malignant use of SM poses a risk to IHS. Beyond this, unimpeded access to Internet infrastructure, when “passively” permitted by countries/governments, can create conditions for “malignant actors” or “fictitious public discourse” to lead to societal disruption and harm, involving both institutions and individuals [170, 171]. The subsequent sections will discuss malignant use cases where SM manipulation is centralized (e.g., governmental) versus decentralized (individuals, special interests, and non-governmental groups).

When centralized control of Internet infrastructure (e.g., by a government) is present, a narrative can in theory be created at the top echelons of power, and wide dissemination of messaging can be reasonably easily achieved. Under well-intended circumstances, this capability should be used to facilitate education and positive health behaviors. Having said that, if a central authority is the “bad actor” and its messaging is used to “manipulate” public discussions, the message may constitute an attempt to influence various policy objectives, such as a particular therapy, vaccine, or preventive health measure(s) [172, 173]. Corrupt central authorities may also manipulate SM messaging for monetary gain and/or political goals [174, 175, 176]. In this context, focused SM messages may involve falsified statements, manipulated statistics, and edited images. Such messaging can be directed at a particular target, including religious, ethnic, racial, political, gender-specific, or other groups [177, 178, 179].

When central control is not present, but laws and/or executive orders render SM fully unrestricted to all potential actors, so-called peripheral control can be attained [180, 181, 182]. Under such circumstances, individuals or interest groups can freely target selected individuals, groups, and organizations for manipulation through the dissemination of false information via SM [183, 184, 185, 186]. In the arena of peripheral control, third-party entities (e.g., interest groups) who wish to influence a particular policy are able to use SM disguised as indigenous individuals or organizations, modulating the discourse to their benefit [187]. Inherent to modern capitalist societies is the fact that major SM platforms are for-profit entities and thus can be maneuvered using financial resources to influence a society from the periphery [130, 188]. Finally, there is a less likely possibility that the more dominant SM platforms may evolve into new “central authorities” over time.

From a public health standpoint, the potential for harm is both real and significant. Hypothetically, a broad range of harmful actions may be initiated using misguided SM, including manipulating populations into reporting to wrong/inappropriate locations for assistance [189, 190], misdirecting local populations with regard to evacuation routes and sanctuaries/safe places, as well as creating public distress that leads to waste or misuse of precious public health resources [191, 192, 193]. Of importance, healthcare workers could theoretically be manipulated through SM platforms against accepting the risks associated with caring for those in need, thus effectively negating a provider’s professional obligations [194, 195]. Although various conspiracy theories have been propagated throughout history, their dissemination has escalated in the era of multiple SM platforms [196, 197]. As a result, there exists the risk of a reality “engineered” by individuals or entities using SM as the ultimate “mind bending” tool [198]. The theme of SM contributing to fear and misinformation or disinformation in the IHS context continues into the 2020s, with similar concerns being noted around the SARS-CoV-2 outbreak [199].

Finally, high-level efforts are underway to reduce the harm from SM-based disinformation at the national and international levels. More specifically, a growing number of governmental and non-governmental organizations actively focus on this important health security problem. This list includes the National Cyber Directorate in Israel, the National Security Communications Team (NSCT) in the United Kingdom, the Australian Security Intelligence Organisation (ASIO) and the Department of Home Affairs in Australia, the Security Intelligence Service and the Defense Intelligence Service in Denmark, and the National Security Agency in France, among many others. In addition, major SM providers are signatories to the European Union (EU) Code of Practice on Disinformation; however, the same providers are not bound by this Code of Practice outside the geographic boundaries of the Union [200].


7. Conclusions

Continuous worldwide information sharing fosters innovation and knowledge creation, thus facilitating humanity’s ongoing progress. The same is true of the implementation of SM tools in the area of public health/health security. Although SM undoubtedly offers important benefits in this realm, in-depth understanding of modern SM platforms remains limited. The introduction of SM into the domain of public health presented the community with a unique opportunity to develop highly efficient, integrated tools for disease tracking and epidemiologic trend identification. At the same time, users must remain cautious, because the potential for both intentional and unintentional misuse of data is real and can result in substantial and often unpredictable harm. Finally, malignant actors in control of the SM narrative can cause deliberate harm through the intentional propagation of “fake news” and misinformation. Consequently, the risks and benefits associated with the use of SM in the realm of public health/IHS must be carefully weighed to minimize negative downstream consequences.


  1. 1. Rodier G. New rules on international public health security. Bulletin of the World Health Organization. 2007;85(6):428-430
  2. 2. Aldis W. Health security as a public health concept: A critical analysis. Health Policy and Planning. 2008;23(6):369-375
  3. 3. Chiu Y-W et al. The nature of international health security. Asia Pacific Journal of Clinical Nutrition. 2009;18(4):679
  4. 4. Bakanidze L, Imnadze P, Perkins D. Biosafety and biosecurity as essential pillars of international health security and cross-cutting elements of biological nonproliferation. BMC Public Health. 2010;10(1):S12
  5. 5. United Nations Development Programme. Human Development Report 1994. New York: Oxford University Press; 1994
  6. 6. Scharoun K, Van Caulil K, Liberman A. Bioterrorism vs. health security—Crafting a plan of preparedness. The Health Care Manager. 2002;21(1):74-92
  7. 7. Heymann DL et al. Global health security: The wider lessons from the West African Ebola virus disease epidemic. The Lancet. 2015;385(9980):1884-1901
  8. 8. Reiter P et al. Global warming and malaria: A call for accuracy. The Lancet Infectious Diseases. 2004;4(6):323-324
  9. 9. Hobson C, Bacon P, Cameron R. Human Security and Natural Disasters. New York, NY: Routledge; 2014
  10. 10. Brown T. ‘Vulnerability is universal’: Considering the place of ‘security’ and ‘vulnerability’ within contemporary global health discourse. Social Science & Medicine. 2011;72(3):319-326
  11. 11. Kay A, Williams O. Global Health Governance: Crisis, Institutions and Political Economy. Hampshire, UK: Springer/Pallgrave Macmillan; 2009
  12. 12. Stawicki TT et al. From “pearls” to “tweets”: How social media and web-based applications are revolutionizing medical education. International Journal of Academic Medicine. 2018;4(2):93
  13. 13. Miller D et al. What is social media. How the World Changed Social Media. 2016;1:1-8
  14. 14. Boyd DM, Ellison NB. Social network sites: Definition, history, and scholarship. Journal of Computer-Mediated Communication. 2007;13(1):210-230
  15. 15. Taprial V, Kanwar P. Understanding Social Media. London, UK: Bookboon; 2012
  16. 16. Thackeray R et al. Adoption and use of social media among public health departments. BMC Public Health. 2012;12(1):242
  17. 17. McMillan SJ, Morrison M. Coming of age with the internet: A qualitative exploration of how the internet has become an integral part of young people’s lives. New Media & Society. 2006;8(1):73-95
  18. 18. Graham MW, Avery EJ, Park S. The role of social media in local government crisis communications. Public Relations Review. 2015;41(3):386-394
  19. 19. Jenkins H et al. Confronting the Challenges of Participatory Culture: Media Education for the 21st Century. Cambridge, MA: Mit Press; 2009
  20. 20. Hansen DL, Shneiderman B, Smith MA. Analyzing Social Media Networks with NodeXL: Insights from a Connected World. Burlington, MA: Morgan Kaufmann; 2010
  21. 21. Li H, Sakamoto Y. Social impacts in social media: An examination of perceived truthfulness and sharing of information. Computers in Human Behavior. 2014;41:278-287
  22. 22. Pavlíček A. Social Media–the Good, the Bad, the Ugly. IDIMT-2013. Prague, Czech Republic; 2013. p. 139
  23. 23. Wu L et al. Mining misinformation in social media. Big Data in Complex and Social Networks. 2016:123-152
  24. 24. Wasike J. Social media ethical issues: Role of a librarian. Library Hi Tech News. 2013;30(1):8-16
  25. 25. Scott PR, Jacka JM. Auditing Social Media: A Governance and Risk Guide. Hoboken, NJ: John Wiley & Sons; 2011
  26. 26. Goldsmith A. Disgracebook policing: social media and the rise of police indiscretion. Policing and Society. 2015;25(3):249-267
  27. 27. Goodman KW, Cushman R, Miller RA. Ethics in biomedical and health informatics: Users, standards, and outcomes. In: Biomedical Informatics. London, UK: Springer; 2014. pp. 329-353
  28. 28. Gritzalis D et al. History of information: the case of privacy and security in social media. In: Proceedings of the History of Information Conference. Athens, Greece: Athens University of Economics & Business; 2014
  29. 29. Potts L. Social Media in Disaster Response: How Experience Architects Can Build for Participation. Philadelphia, PA: Routledge; 2013
  30. 30. Bricout JC, Baker PM. Leveraging online social networks for people with disabilities in emergency communications and recovery. International Journal of Emergency Management. 2010;7(1):59-74
  31. 31. Imran M et al. Processing social media messages in mass emergency: A survey. ACM Computing Surveys (CSUR). 2015;47(4):67
  32. 32. Castillo C. Big Crisis Data: Social Media in Disasters and Time-Critical Situations. Cambridge, England: Cambridge University Press; 2016
  33. 33. Korac-Boisvert N, Kouzmin A, et al. The dark side of info-age social networks in public organizations and creeping crises. Administrative Theory & Praxis. 1994:57-82
  34. 34. Bowdon MA. Tweeting an ethos: Emergency messaging, social media, and teaching technical communication. Technical Communication Quarterly. 2014;23(1):35-54
  35. 35. Flanagin AJ, Metzger MJ. Perceptions of Internet information credibility. Journalism and Mass Communication Quarterly. 2000;77(3):515-540
  36. 36. Johnson TJ, Kaye BK. Cruising is believing?: Comparing Internet and traditional sources on media credibility measures. Journalism and Mass Communication Quarterly. 1998;75(2):325-340
  37. 37. Flanagin AJ, Metzger MJ. The role of site features, user attributes, and information verification behaviors on the perceived credibility of web-based information. New Media & Society. 2007;9(2):319-342
  38. 38. Greer JD. Evaluating the credibility of online information: A test of source and advertising influence. Mass Communication and Society. 2003;6(1):11-28
  39. 39. Eastin MS. Credibility assessments of online health information: The effects of source expertise and knowledge of content. Journal of Computer-Mediated Communication. 2001;6(4):JCMC643
  40. 40. Bian J et al. Finding the right facts in the crowd: Factoid question answering over social media. In: Proceedings of the 17th International Conference on World Wide Web. Beijing, China. 2008
  41. 41. Knight GA, Cavusgil ST. Innovation, organizational capabilities, and the born-global firm. Journal of International Business Studies. 2004;35(2):124-141
  42. 42. Flanagin AJ, Metzger MJ. Digital media and youth: Unparalleled opportunity and unprecedented responsibility. Cambridge, MA: The MIT Press; 2008. pp. 5-28
  43. 43. Bakardjieva M, Feenberg A. Community technology and democratic rationalization. The Information Society. 2002;18(3):181-192
  44. 44. Shao G. Understanding the appeal of user-generated media: A uses and gratification perspective. Internet Research. 2009;19(1):7-25
  45. 45. Andreassen CS, Pallesen S, Griffiths MD. The relationship between addictive use of social media, narcissism, and self-esteem: Findings from a large national survey. Addictive Behaviors. 2017;64:287-293
  46. 46. Blackwell D et al. Extraversion, neuroticism, attachment style and fear of missing out as predictors of social media use and addiction. Personality and Individual Differences. 2017;116:69-72
  47. 47. Monacis L et al. Social networking addiction, attachment style, and validation of the Italian version of the Bergen Social Media Addiction Scale. Journal of Behavioral Addictions. 2017;6(2):178-186
  48. 48. LaRose R, Kim J, Peng W. Social networking: Addictive, compulsive, problematic, or just another media habit? In: A Networked Self. Philadelphia, PA: Routledge; 2010. pp. 67-89
  49. 49. Homayoun A. The secret social media lives of teenagers. New York, NY: The New York Times Company. 2017;7:1-2
  50. 50. Khan ML. Social media engagement: What motivates user participation and consumption on YouTube? Computers in Human Behavior. 2017;66:236-247
  51. 51. Gunitsky S. Corrupting the cyber-commons: Social media as a tool of autocratic stability. Perspectives on Politics. 2015;13(1):42-54
  52. 52. Tavares S. Paratextual prometheus. Digital paratexts on YouTube, vimeo and prometheus transmedia campaign. International Journal of Transmedia Literacy (IJTL). 2015;1:175-195
  53. 53. Pullen JP. How Vimeo became hipster YouTube. 2020 [07 January 2011]. Available from:
  54. 54. Brophy KM. Social media platform with gamification of user-generated content. 2017. Google Patents
  55. 55. Matejic N. Social Media Rules of Engagement: Why Your Online Narrative is the Best Weapon During a Crisis. Hoboken, NJ: John Wiley & Sons; 2015
  56. 56. Yazdani A, Lee J-S, Ebrahimi T. Implicit emotional tagging of multimedia using EEG signals and brain computer interface. In: Proceedings of the first SIGMM Workshop on Social Media. Beijing, China: ACM; 2009
  57. 57. Chen W, Lee K-H. Sharing, liking, commenting, and distressed? The pathway between Facebook interaction and psychological distress. Cyberpsychology, Behavior and Social Networking. 2013;16(10):728-734
  58. 58. Lee S-Y, Hansen SS, Lee JK. What makes us click “like” on Facebook? Examining psychological, technological, and motivational factors on virtual endorsement. Computer Communications. 2016;73:332-341
  59. 59. Meshi D, Tamir DI, Heekeren HR. The emerging neuroscience of social media. Trends in Cognitive Sciences. 2015;19(12):771-782
  60. 60. Haitsuka SA et al. Live streaming broadcast service with artist and fan competitive reward system. 2016. Available from: [Accessed: 22 July 2020]
  61. 61. Davey CG et al. Being liked activates primary reward and midline self-related brain regions. Human Brain Mapping. 2010;31(4):660-668
  62. 62. Sherman LE et al. What the brain ‘Likes’: Neural correlates of providing feedback on social media. Social Cognitive and Affective Neuroscience. 2018;13(7):699-707
  63. 63. Balleine BW, Delgado MR, Hikosaka O. The role of the dorsal striatum in reward and decision-making. Journal of Neuroscience. 2007;27(31):8161-8165
  64. 64. Achterberg M et al. Control your anger! The neural basis of aggression regulation in response to negative social feedback. Social Cognitive and Affective Neuroscience. 2016;11(5):712-720
  65. 65. Sherman LE et al. Peer influence via instagram: Effects on brain and behavior in adolescence and young adulthood. Child Development. 2018;89(1):37-47
  66. 66. Berryman C, Ferguson CJ, Negy C. Social media use and mental health among young adults. Psychiatric Quarterly. 2018;89(2):307-314
  67. 67. Chambers D. Social Media and Personal Relationships: Online Intimacies and Networked Friendship. Hampshire, UK: Springer/Palgrave Macmillan; 2013
  68. 68. Comunello F, Anzera G. Will the revolution be tweeted? A conceptual framework for understanding the social media and the Arab Spring. Islam and Christian–Muslim Relations. 2012;23(4):453-470
  69. 69. Child JT, Starcher SC. Fuzzy Facebook privacy boundaries: Exploring mediated lurking, vague-booking, and Facebook privacy management. Computers in Human Behavior. 2016;54:483-490
  70. 70. Braithwaite DO, Suter EA, Floyd K. Engaging Theories in Family Communication: Multiple Perspectives. Philadelphia, PA: Routledge; 2017
  71. 71. Grigg DW. Cyber-aggression: Definition and concept of cyberbullying. Journal of Psychologists and Counsellors in Schools. 2010;20(2):143-156
  72. 72. Smith PK. Cyberbullying and cyber aggression. In: Handbook of School Violence and School Safety. Routledge; 2012. pp. 111-121
  73. 73. Wright MF. Cyber victimization on college campuses: Longitudinal associations with suicidal ideation, depression, and anxiety. Criminal Justice Review. 2016;41(2):190-203
  74. 74. Nevin AD. Cyber-Psychopathy: Examining the Relationship Between Dark E-Personality and Online Misconduct. London, Ontario: The University of Western Ontario; 2015. Available from:
  75. 75. Buckels EE et al. Internet trolling and everyday sadism: Parallel effects on pain perception and moral judgment. Journal of Personality. 2019;87(2):328-340
  76. 76. Jamison AM, Broniatowski DA, Quinn SC. Malicious actors on Twitter: A guide for public health researchers. American Journal of Public Health. 2019;109(5):688-692
  77. 77. Maltby J et al. Implicit theories of online trolling: Evidence that attention-seeking conceptions are associated with increased psychological resilience. British Journal of Psychology. 2016;107(3):448-466
  78. 78. Einarsen S, Aasland MS, Skogstad A. Destructive leadership behaviour: A definition and conceptual model. The Leadership Quarterly. 2007;18(3):207-216
  79. 79. Spector PE, Fox S. An emotion-centered model of voluntary work behavior: Some parallels between counterproductive work behavior and organizational citizenship behavior. Human Resource Management Review. 2002;12(2):269-292
  80. 80. Bradshaw S, Howard P. Troops, Trolls and Troublemakers: A Global Inventory of Organized Social Media Manipulation. 2017
  81. 81. Dredze M. How social media will change public health. IEEE Intelligent Systems. 2012;27(4):81-84
  82. 82. Schmidt CW. Trending now: Using social media to predict and track disease outbreaks. Environmental Health Perspectives. 2012;120(1):a30
  83. 83. Mackay IM, Arden KE. Middle East respiratory syndrome: An emerging coronavirus infection tracked by the crowd. Virus Research. 2015;202:60-88
  84. 84. Chunara R, Andrews JR, Brownstein JS. Social and news media enable estimation of epidemiological patterns early in the 2010 Haitian cholera outbreak. The American Journal of Tropical Medicine and Hygiene. 2012;86(1):39-45
  85. 85. Carter M. How Twitter may have helped Nigeria contain Ebola. BMJ. 2014;349:g6946
  86. 86. Schmidt CW. Trending Now: Using Social Media to Predict and Track Disease Outbreaks. National Institute of Environmental Health Sciences; 2012. Available from: [Accessed: 23 July 2020]
  87. 87. Xie Y et al. Detecting and tracking disease outbreaks by mining social media data. In: Twenty-Third International Joint Conference on Artificial Intelligence. Evanston, IL: Northwestern University; 2013. Available from: [Accessed: 23 July 2020]
  88. 88. Charles-Smith LE et al. Using social media for actionable disease surveillance and outbreak management: A systematic literature review. PLoS One. 2015;10(10):e0139701
  89. 89. Costa FF. Social networks, web-based tools and diseases: Implications for biomedical research. Drug Discovery Today. 2013;18(5-6):272-281
  90. 90. Chorianopoulos K, Talvis K. Flutrack. org: Open-source and linked data for epidemiology. Health Informatics Journal. 2016;22(4):962-974
  91. 91. Pelat C et al. More diseases tracked by using Google Trends. Emerging Infectious Diseases. 2009;15(8):1327
  92. 92. Carneiro HA, Mylonakis E. Google trends: A web-based tool for real-time surveillance of disease outbreaks. Clinical Infectious Diseases. 2009;49(10):1557-1564
  93. 93. Du RY, Hu Y, Damangir S. Leveraging trends in online searches for product features in market response modeling. Journal of Marketing. 2015;79(1):29-43
  94. 94. Zabin J, Brebach G. Precision Marketing: The New Rules for Attracting, Retaining, and Leveraging Profitable Customers. Hoboken, NJ: John Wiley & Sons; 2004
  95. 95. Dugas AF et al. Google Flu Trends: Correlation with emergency department influenza rates and crowding metrics. Clinical Infectious Diseases. 2012;54(4):463-469
  96. 96. Strzelecki A. Infodemiological Study Using Google Trends on Coronavirus Epidemic in Wuhan, China. arXiv preprint arXiv:2001.11021. 2020
  97. 97. Olson DR et al. Reassessing Google flu trends data for detection of seasonal and pandemic influenza: A comparative epidemiological study at three geographic scales. PLoS Computational Biology. 2013;9(10):e1003256
  98. 98. Oxford Analytica. Coronavirus shows how China has changed since SARS. Emerald Expert Briefings
  99. 99. Chubin DE, Hackett EJ, Hackett EJ. Peerless Science: Peer Review and US Science Policy. Albany, NY: SUNY Press; 1990
  100. 100. Wilson EB. An Introduction to Scientific Research. New York, NY: Dover Publications, Inc.; 1990
  101. 101. Plaza M et al. The use of distributed consensus algorithms to curtail the spread of medical misinformation. International Journal of Academic Medicine. 2019;5(2):93
  102. 102. Sunstein CR. #Republic: Divided Democracy in the Age of Social Media. Princeton, NJ: Princeton University Press; 2018
  103. 103. Raboy M et al. Broadcasting, Voice, and Accountability: A Public Interest Approach to Policy, Law, and Regulation. Ann Arbor, MI: University of Michigan Press; 2008
  104. 104. Kietzmann JH et al. Social media? Get serious! Understanding the functional building blocks of social media. Business Horizons. 2011;54(3):241-251
  105. 105. Lagu T, Greysen SR. Physician, monitor thyself: professionalism and accountability in the use of social media. The Journal of Clinical Ethics. 2011;22(2):187-190
  106. 106. Vaidhyanathan S. The Anarchist in the Library: How the Clash Between Freedom and Control Is Hacking the Real World and Crashing the System. New York, NY: Basic Books; 2005
  107. 107. Kapoor KK et al. Advances in social media research: Past, present and future. Information Systems Frontiers. 2018;20(3):531-558
  108. 108. Aiken M. Cyber Effect: An Expert in Cyberpsychology Explains how Technology is Shaping Our Children, Our Behavior, and Our Values—and What We Can Do About It. New York, NY: Spiegel & Grau; 2017
  109. 109. Shirky C. The political power of social media: Technology, the public sphere, and political change. Foreign Affairs. 2011:28-41
  110. 110. Wood W, Wong FY, Chachere JG. Effects of media violence on viewers’ aggression in unconstrained social interaction. Psychological Bulletin. 1991;109(3):371
  111. 111. Patton DU et al. Social media as a vector for youth violence: A review of the literature. Computers in Human Behavior. 2014;35:548-553
  112. 112. Kavanaugh AL et al. Social media use by government: From the routine to the critical. Government Information Quarterly. 2012;29(4):480-491
  113. 113. Ji X, Chun SA, Geller J. Epidemic outbreak and spread detection system based on Twitter data. In: International Conference on Health Information Science. Berlin, Heidelberg, Germany: Springer; 2012
  114. 114. Valente TW. Social Networks and Health: Models, Methods, and Applications. Vol. 1. New York: Oxford University Press; 2010
  115. 115. Korda H, Itani Z. Harnessing social media for health promotion and behavior change. Health Promotion Practice. 2013;14(1):15-23
  116. 116. Long JB, Ehrenfeld JM. The Role of Augmented Intelligence (AI) in Detecting and Preventing the Spread of Novel Coronavirus. Journal of Medical Systems. 2020;44:59
  117. 117. Stephenson J. Coronavirus outbreak—An evolving global health emergency. JAMA Health Forum. 2020;1(2):e200114-e200114
  118. 118. Bao Y et al. 2019-nCoV epidemic: Address mental health care to empower society. The Lancet. 2020
  119. 119. Chen X et al. Why students share misinformation on social media: Motivation, gender, and study-level differences. The Journal of Academic Librarianship. 2015;41(5):583-592
  120. 120. Shao C et al. Hoaxy: A platform for tracking online misinformation. In: Proceedings of the 25th International Conference Companion on World Wide Web. International World Wide Web Conferences Steering Committee; 2016. Available from: [Accessed: 23 July 2020]
  121. 121. Wilson M. The Hardest Job in Silicon Valley is a Living Nightmare. 2018 [09 November 2018]. Available from:
  122. 122. Witt A, Suzor N, Huggins A. The rule of law on Instagram: An evaluation of the moderation of images depicting women’s bodies. UNSWLJ. 2019;42:557
  123. 123. Andreassen CS et al. The relationship between addictive use of social media and video games and symptoms of psychiatric disorders: A large-scale cross-sectional study. Psychology of Addictive Behaviors. 2016;30(2):252
  124. 124. Aardema F et al. Virtual reality induces dissociation and lowers sense of presence in objective reality. Cyberpsychology, Behavior and Social Networking. 2010;13(4):429-435
  125. 125. Meyrowitz J. No Sense of Place: The Impact of Electronic Media on Social Behavior. New York, NY: Oxford University Press; 1986
  126. 126. Skeels MM, Grudin J. When social networks cross boundaries: A case study of workplace use of Facebook and LinkedIn. In: Proceedings of the ACM 2009 International Conference on Supporting Group Work. 2009;5:95-104
  127. 127. Bailenson JN et al. Equilibrium theory revisited: Mutual gaze and personal space in virtual environments. Presence Teleoperators and Virtual Environments. 2001;10(6):583-598
  128. 128. Abril PS. A (My) space of one’s own: on privacy and online social networks. Northwestern Journal of Technology & Intellectual Property. 2007;6:73
  129. 129. Culotta A. Towards detecting influenza epidemics by analyzing Twitter messages. In: Proceedings of the First Workshop on Social Media Analytics. 2010;7:115-122
  130. 130. Kass-Hout TA, Alhinnawi H. Social media in public health. British Medical Bulletin. 2013;108(1):5-24
  131. 131. Brodeur J-P, Dupont B. Introductory essay: The role of knowledge and networks in policing. In: The Handbook of Knowledge-Based Policing: Current Conceptions and Future Directions. Chichester, West Sussex, England: John Wiley & Sons; 2008. pp. 9-36
  132. 132. Kumar S, Shah N. False Information on Web and Social Media: A Survey. arXiv preprint arXiv:1804.08559. 2018
  133. 133. Aronson RH, McMurtrie J. The use and misuse of high-tech evidence by prosecutors: Ethical and evidentiary issues. Fordham Law Review. 2007;76:1453
  134. 134. Alexander DE. Social media in disaster risk reduction and crisis management. Science and Engineering Ethics. 2014;20(3):717-733
  135. 135. Veil SR, Buehner T, Palenchar MJ. A work-in-process literature review: Incorporating social media in risk and crisis communication. Journal of Contingencies & Crisis Management. 2011;19(2):110-122
  136. 136. Wendling C, Radisch J, Jacobzone S. The Use of Social Media in Risk and Crisis Communication. 2013. Available from: [Accessed: 23 July 2020]
  137. 137. Safieddine F, Dordevic M, Pourghomi P. Spread of misinformation online: Simulation impact of social media newsgroups. In: 2017 Computing Conference. IEEE; 2017. Available from: [Accessed: 23 July 2020]
  138. 138. Baker JP. Mercury, vaccines, and autism: one controversy, three histories. American Journal of Public Health. 2008;98(2):244-253
  139. 139. Hobson-West P. ‘Trusting blindly can be the biggest risk of all’: Organised resistance to childhood vaccination in the UK. Sociology of Health & Illness. 2007;29(2):198-215
  140. 140. Clarke CE. A question of balance: The autism-vaccine controversy in the British and American elite press. Science Communication. 2008;30(1):77-107
  141. 141. Link K. The Vaccine Controversy: The History, Use, and Safety of Vaccinations. Westport, CT: Greenwood Publishing Group; 2005
  142. 142. Dixon GN, Clarke CE. Heightening uncertainty around certain science: Media coverage, false balance, and the autism-vaccine controversy. Science Communication. 2013;35(3):358-382
  143. 143. Gross L. A broken trust: lessons from the vaccine–autism wars. PLoS Biology. 2009;7(5):e1000114
  144. 144. Brotherton R. Suspicious Minds: Why We Believe Conspiracy Theories. New York, NY: Bloomsbury Publishing; 2015
  145. 145. Kramer RM, Cook KS. Trust and Distrust in Organizations: Dilemmas and Approaches. New York, NY: Russell Sage Foundation; 2004
  146. 146. Holmberg C. Politicization of the LOW-CARB HIGH-FAT Diet in Sweden, promoted on social media by non-conventional experts. International Journal of E-Politics (IJEP). 2015;6(3):27-42
  147. 147. Jang K, Baek YM. When information from public health officials is untrustworthy: The use of online news, interpersonal networks, and social media during the MERS outbreak in South Korea. Health Communication. 2019;34(9):991-998
  148. 148. Tang L et al. Social media and outbreaks of emerging infectious diseases: A systematic review of literature. American Journal of Infection Control. 2018;46(9):962-972
  149. 149. Baccarella CV et al. Social media? It’s serious! Understanding the dark side of social media. European Management Journal. 2018;36(4):431-438
  150. 150. Allcott H, Gentzkow M. Social media and fake news in the 2016 election. Journal of Economic Perspectives. 2017;31(2):211-236
  151. 151. Media Y. Swine Flu Conspiracy Theories. 2009 [28 April 2009]. Available from:
  152. 152. Today R. Swine Flu, Bird Flu ‘Never Happened’: Probe into H1N1 ‘False Pandemic’. 2018 [29 September 2010]. Available from:
  153. 153. Labs Z. #211 Debate: FEMA Camps Trains Trucks Busses Coffins Swine Flu & Martial Law. 2009 [29 September 2018]. Available from:
  154. 154. Videos G. Google Search: “Swine Flu Epidemic”. 2018 [cited 29 September 2018]. Available from:
  155. 155. Andrews P. China’s new killer virus is mutated SARS & may be one more mutation away from infecting millions. Will it make the lethal leap? 2020
  156. 156. Taylor S. The Psychology of Pandemics: Preparing for the Next Global Outbreak of Infectious Disease. Newcastle, UK: Cambridge Scholars Publishing; 2019
  157. 157. Xiao B, Xiao L. Chinese Scientist Finds “Killer Coronavirus Probably Originated from a Laboratory in Wuhan”. Available from: [Accessed: 23 July 2020]
  158. 158. Xiao B, Xiao L. Smoking gun? Chinese scientist finds “Killer coronavirus probably originated from a laboratory in Wuhan”. SAT. 2020;10:44
  159. 159. Vijaykumar S, Jin Y, Nowak G. Social media and the virality of risk: The risk amplification through media spread (RAMS) model. Journal of Homeland Security and Emergency Management. 2015;12(3):653-677
  160. 160. Marwick AE, Boyd D. I tweet honestly, I tweet passionately: Twitter users, context collapse, and the imagined audience. New Media & Society. 2011;13(1):114-133
  161. 161. Gerlitz C, Helmond A. The like economy: Social buttons and the data-intensive web. New Media & Society. 2013;15(8):1348-1365
  162. 162. Lewis K, Gonzalez M, Kaufman J. Social selection and peer influence in an online social network. Proceedings of the National Academy of Sciences. 2012;109(1):68-72
  163. 163. Harlow S. Social media and social movements: Facebook and an online Guatemalan justice movement that moved offline. New Media & Society. 2012;14(2):225-243
  164. 164. Pawar M. Resurrection of traditional communities in postmodern societies. Futures. 2003;35(3):253-265
  165. 165. Cao X. What Speech Should Be Outside of Freedom of Expression? 2010. Available from: [Accessed: 23 July 2020]
  166. 166. Langvardt AW. Not on our shelves: A first amendment analysis of library censorship in the public schools. Nebraska Law Review. 1982;61:98
  167. 167. Kaplan AM, Haenlein M. Users of the world, unite! The challenges and opportunities of Social Media. Business Horizons. 2010;53(1):59-68
  168. 168. Clinton H. Internet Rights and Wrongs: Choices & Challenges in a Networked World. US State Department; 2011. Available from: [Accessed: 23 July 2020]
  169. 169. Barberá P. How social media reduces mass political polarization. In: Evidence from Germany, Spain, and the US. Job Market Paper. New York University; 2014. p. 46. Available from: [Accessed: 23 July 2020]
  170. 170. Stavridis J. Convergence: Illicit Networks and National Security in the Age of Globalization. Washington, D.C.: Government Printing Office; 2013
  171. 171. Svete U. European E-readiness? Cyber dimension of national security policies. Journal of Comparative Politics. 2012;5(1):38-59
  172. 172. Aliyev H. Precipitating state failure: do civil wars and violent non-state actors create failed states? Third World Quarterly. 2017;38(9):1973-1989
  173. 173. Chen D. “Supervision by public opinion” or by government officials? Media criticism and central-local government relations in China. Modern China. 2017;43(6):620-645
  174. 174. Enikolopov R, Petrova M, Sonin K. Social media and corruption. American Economic Journal: Applied Economics. 2018;10(1):150-174
  175. 175. Enikolopov R, Petrova M, Sonin K. Do Political Blogs Matter?: Corruption in State-controlled Companies, Blog Postings, and DDoS Attacks. Princeton, NJ: Centre for Economic Policy Research; 2012
  176. 176. Myers DP. Violation of treaties: Bad faith, nonexecution and disregard. American Journal of International Law. 1917;11(4):794-819
  177. 177. Fuchs C. Social Media: A Critical Introduction. Los Angeles, CA: Sage; 2017
  178. 178. Korta SM. Fake News, Conspiracy Theories, and Lies: An Information Laundering Model for Homeland Security. Monterey, California: Naval Postgraduate School; 2018
  179. 179. Ting CSW, Song SGZ. What Lies Beneath the Truth: A Literature Review on Fake News, False Information and More. 2018 [28 October 2017]. Available from:
  180. 180. Fuchs C, Trottier D. Theorising Social Media, Politics and the State: An Introduction, in Social Media, Politics and the State. New York, NY: Routledge; 2014. pp. 15-50
  181. 181. Cohen E. Mass Surveillance and State Control: The Total Information Awareness Project. New York, NY: Springer; 2010
  182. 182. Castells M. Networks of Outrage and Hope: Social Movements in the Internet Age. Malden, MA: John Wiley & Sons; 2015
  183. 183. Fox NJ. Foucault, Foucauldians and sociology. British Journal of Sociology. 1998:415-433
  184. 184. White D. Public Discourse: Importance & Strategies. 2018 [28 October 2018]. Available from:
  185. 185. Wolf R. Immigration, Gay Rights, Politics, Abortion, Taxes, Technology: Crunch Time at the Supreme Court. 2018 [28 October 2018]. Available from:
  186. 186. Vaccine Myths Debunked. 2018 [28 October 2018]. Available from:
  187. 187. Richey M. Contemporary Russian revisionism: understanding the Kremlin’s hybrid warfare and the strategic and tactical deployment of disinformation. Asia Europe Journal. 2018;16(1):101-113
  188. 188. Dwoskin E. Facebook Profit Hits an All-time High, Unaffected by Recent Scandals—So Far. 2018 [28 October 2017]. Available from:
  189. 189. WHO. Strengthening Heath Security by Implementing the International Health Regulations (2005): IHR Procedures Concerning Public Health Emergencies of International Concern (PHEIC). 2018 [28 October 2018]. Available from:
  190. 190. Chou W-YS, Oh A, Klein WM. Addressing health-related misinformation on social media. JAMA. 2018;320(23):2417-2418
  191. 191. Briggs CL. Stories in the Time of Cholera: Racial Profiling during a Medical Nightmare. Los Angeles, CA: University of California Press; 2003
  192. 192. Berlant L. Slow death (sovereignty, obesity, lateral agency). Critical Inquiry. 2007;33(4):754-780
  193. 193. Bickerstaff K, Simmons P, Pidgeon N. Situating local experience of risk: Peripherality, marginality and place identity in the UK foot and mouth disease crisis. Geoforum. 2006;37(5):844-858
  194. 194. Papadimos TJ et al. Ethics of outbreaks position statement. Part 2: Family-centered care. Critical Care Medicine. 2018;46(11):1856-1860
  195. 195. Papadimos TJ et al. Ethics of outbreaks position statement. Part 1: Therapies, treatment limitations, and duty to treat. Critical Care Medicine. 2018;46(11):1842-1855
  196. 196. Andrejevic M. Infoglut: How Too Much Information Is Changing the Way We Think and Know. New York, NY: Routledge; 2013
  197. 197. Shoemaker PJ, Reese SD. Mediating the Message in the 21st Century: A Media Sociology Perspective. New York, NY: Routledge; 2013
  198. 198. Arendt H. The Human Condition. Chicago, IL: University of Chicago Press; 2013
  199. 199. Kickbusch I, Leung G. Response to the Emerging Novel Coronavirus Outbreak. British Medical Journal. 2020;368:m406
  200. 200. Library of Congress Law. Government Responses to Disinformation on Social Media Platforms: Comparative Summary. 2020. Available from: [Accessed: 16 June 2020]