Open access peer-reviewed chapter

The Use of Artificial Intelligence to Bridge Multiple Narratives in Disaster Response

Written By

Karla Saldaña Ochoa

Submitted: 22 August 2022 Reviewed: 21 September 2022 Published: 23 January 2023

DOI: 10.5772/intechopen.108196

From the Edited Volume

Avantgarde Reliability Implications in Civil Engineering

Edited by Maguid Hassan


Abstract

Disaster response presents the current situation, creates a summary of available information on the disaster, and sets the path for recovery and reconstruction. During the last 10 years, various disciplines have investigated disaster response in two ways. First, researchers have published several studies using state-of-the-art technologies for disaster response. Second, humanitarian organizations have produced numerous mission statements on how to respond to natural disasters. This suggests a question: if we have developed a considerable number of studies to respond to a natural disaster, how can we cross-validate their results with humanitarian organizations’ mission statements to bring the knowledge of specific disciplines into disaster response? To address this question, the research proposes an experiment that considers both: knowledge produced in the form of 8364 abstracts of academic writing on disaster response and 1930 humanitarian organizations’ mission statements indexed online. The experiment uses an Artificial Intelligence algorithm, a Neural Network, to perform the task of word embedding––Word2Vec––and an unsupervised machine learning algorithm for clustering––Self-Organizing Maps. Finally, it employs Human Intelligence for the selection of information and decision-making. The result is a digital infrastructure that suggests actions and tools relevant to a specific scenario, providing valuable information loaded with architectural knowledge to guide decision-makers at the operational level in tasks dealing with spatial and constructive constraints.

Keywords

  • artificial intelligence
  • disaster response
  • big data
  • Word2Vec
  • self-organizing maps
  • architecture

1. Introduction

After a natural disaster, disaster response is essential. Its assessments present the current situation and summarize information that guides rescue forces and other immediate relief efforts on site [1]. Currently, many scholars publish research on disaster response, focusing on implementing artificial intelligence to process big data. Examples include a platform to classify crisis-related communications [2], a model that learns damage assessment and proposes applications [3], an automated method to retrieve information for humanitarian crises [4], and an interface to assist with fast decision-making during humanitarian health crises [5]. In parallel, we observe a growing global presence of humanitarian organizations that publish humanitarian mission statements and declarations. There are more than 4000 organizations active in this area; examples of large ones include the World Food Programme (WFP), the Cooperative for Assistance and Relief Everywhere (CARE), the International Federation of Red Cross and Red Crescent Societies (IFRC), and Action Against Hunger (AAH) [6]. However, the two narratives do not overlap and often ignore each other’s efforts. To achieve an understanding that leads to a unification of the knowledge produced, one must think of methods to analyze both academic literature and humanitarian mission statements to create a common ground.

Most scientific publications and mission statements of humanitarian organizations are published in text form, which is challenging to analyze with traditional statistical methods. Nowadays, artificial intelligence algorithms are used to facilitate this process. The branch of research that concentrates on text analysis with these methods is called text mining––a specific branch of data mining. Text mining is the process of analyzing texts to obtain new information from them. It does this by identifying patterns or correlations between terms, making it possible to find information that is not explicit in the text. This new information supports inferences in tasks such as the classification, clustering, or prediction of texts. Current studies continue to develop and improve methods for analyzing text data, such as the study by Tshitoyan et al. [7], which serves as a reference for this experiment.

2. Methodology

This article focuses on the later operations of disaster response: building safety assessments, temporary housing, and policy recommendations. These tasks serve as keywords to crawl academic publications and humanitarian organizations’ mission statements, defining a specific framework directed toward architectural practices for responding to natural disasters.

2.1 Academic publications

To collect academic publications, the pipeline follows the approach of Tshitoyan et al. [7]. In their article, the researchers proposed working with abstracts, arguing that abstracts communicate information concisely and straightforwardly, avoiding unnecessary words. In contrast, full texts sometimes contain negative relationships and writing styles with complex sentences, which would require different encoding methods than the ones proposed in the present research.

The abstracts of academic publications were collected from the ScienceDirect database (https://www.sciencedirect.com), one of the most significant and extensive academic archives of journals and conference proceedings. Using a diverse database such as ScienceDirect is beneficial, as a substantial amount of scientific literature from many fields can be assembled to address architectural practices in disaster response. To collect the abstracts, an API was used, which requires keywords to filter data according to a specific interest. The API has two syntax rules: the first, +AND+, searches for word pairs; the second, +, requires that the word be contained in the text. By setting the keywords Building+AND+Safety+Assessments, Housing+AND+Disaster+Response, and Policy+AND+Recommendations+AND+Disaster+Response, 9000 abstracts were collected. The articles are from the last 10 years, 2010 to 2020.
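To make this step concrete, a minimal Python sketch of such a keyword-driven crawl is given below. It is an illustration rather than the script used in this research: the API key is a placeholder, and the endpoint, parameters, and response fields are assumptions based on the publicly documented Elsevier ScienceDirect Search API.

```python
import requests

# Placeholder credentials and endpoint; consult the Elsevier Developer
# Portal (https://dev.elsevier.com) for the authoritative schema.
API_KEY = "YOUR_ELSEVIER_API_KEY"
BASE_URL = "https://api.elsevier.com/content/search/sciencedirect"

# The three keyword queries used in this study
QUERIES = [
    "Building+AND+Safety+Assessments",
    "Housing+AND+Disaster+Response",
    "Policy+AND+Recommendations+AND+Disaster+Response",
]

def fetch_abstracts(query, start=0, count=25):
    """Fetch one page of search results for a keyword query."""
    params = {
        "query": query,
        "date": "2010-2020",  # restrict to the last 10 years
        "start": start,
        "count": count,
        "apiKey": API_KEY,
    }
    response = requests.get(BASE_URL, params=params, timeout=30)
    response.raise_for_status()
    entries = response.json().get("search-results", {}).get("entry", [])
    # Keep the title and abstract text when present (field names assumed)
    return [(e.get("dc:title", ""), e.get("dc:description", "")) for e in entries]

abstracts = []
for q in QUERIES:
    abstracts.extend(fetch_abstracts(q))
```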

As suggested by Tshitoyan et al. [7], abstracts whose titles contained the keywords “Foreword,” “Prelude,” “Commentary,” “Workshop,” “Conference,” “Symposium,” “Comment,” “Retract,” “Correction,” “Erratum,” or “Memorial” were removed. Figure 1 shows a word cloud of the collected abstracts, in which the main concepts are represented and an overview of the general intention is graspable.

Figure 1.

Word cloud of 8364 abstracts from the field of disaster response.

2.2 Professional humanitarian organizations

Professional humanitarian organizations have continuously expanded over the past decades. Worldwide, more than 125 million people rely on humanitarian aid, double the number of 10 years ago [6]. Commonly, such organizations communicate with their stakeholders via online platforms, sharing their mission statements, aims, and goals. Two of the leading online platforms are Wikipedia and ReliefWeb. Hence, data on humanitarian organizations’ mission statements were collected from these two web sources.

From Wikipedia, 749 humanitarian mission statements were collected using the Wikipedia API under the category “Humanitarian aid organizations.” The collected texts were translated into English. To collect the texts from ReliefWeb, a twofold process was performed. The first step was to collect 1154 links to humanitarian organizations from the ReliefWeb website (https://reliefweb.int), a specialized digital service of the UN Office for the Coordination of Humanitarian Affairs (OCHA); those links were organized under the tag “Organizations.” The second step was to access each retrieved link and parse the source text, searching for the tags “about,” “about us,” “we are,” “who we are,” and “what we do,” with a particular focus on English texts. After this two-step process, a description and mission statement were obtained for each organization.
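The second step of this process can be sketched as follows. This is a simplified illustration rather than the original crawler: the heading keywords follow the chapter, but the assumed HTML structure (a heading followed by paragraphs) and the five-paragraph limit are assumptions.

```python
import requests
from bs4 import BeautifulSoup

# Heading texts that typically introduce a mission statement
ABOUT_TAGS = {"about", "about us", "we are", "who we are", "what we do"}

def extract_mission_statement(url):
    """Return the text following an 'about us'-style heading, or None."""
    html = requests.get(url, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")
    for heading in soup.find_all(["h1", "h2", "h3"]):
        if heading.get_text(strip=True).lower() in ABOUT_TAGS:
            # Collect the paragraphs that follow the matched heading
            paragraphs = [p.get_text(" ", strip=True)
                          for p in heading.find_all_next("p", limit=5)]
            return " ".join(paragraphs)
    return None
```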

After joining both datasets (Wikipedia and ReliefWeb), an overlap of 87 organizations was found. This points to a lack of consistency in how professional humanitarian aid is indexed on the web. Nevertheless, we worked with all the humanitarian organizations found, totaling 1930 entries. Figure 2 shows the word cloud of the overall dataset.

Figure 2.

Word cloud of 1930 mission statements from humanitarian organizations.

2.3 Data processing and encoding

To pre-process the text data, three operations were applied: first, lower-casing and de-accenting; second, removing stop words; and third, keeping only words belonging to a part of speech (nouns, pronouns, adjectives, verbs, adverbs, prepositions, conjunctions, and interjections).
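A minimal sketch of these three operations is shown below, assuming the gensim and NLTK libraries (the chapter does not specify its tooling); the Penn Treebank tag prefixes approximate the parts of speech listed above.

```python
import unicodedata

import nltk  # assumes nltk.download("averaged_perceptron_tagger") was run
from gensim.parsing.preprocessing import remove_stopwords

# Penn Treebank tag prefixes covering nouns, pronouns, adjectives, verbs,
# adverbs, prepositions/subordinating conjunctions, coordinating
# conjunctions, and interjections.
KEEP_TAGS = ("NN", "PR", "JJ", "VB", "RB", "IN", "CC", "UH")

def preprocess(text):
    """Lower-case, de-accent, remove stop words, and filter by part of speech."""
    text = text.lower()
    # De-accent: decompose characters and drop the combining marks
    text = "".join(c for c in unicodedata.normalize("NFKD", text)
                   if not unicodedata.combining(c))
    tokens = remove_stopwords(text).split()
    return [word for word, tag in nltk.pos_tag(tokens)
            if tag.startswith(KEEP_TAGS)]
```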

To encode the text data into numerical vectors, the research uses word embedding, a natural language processing technique that assigns high-dimensional vectors (embeddings) to the words in a text corpus, preserving their syntax and semantics. Tshitoyan et al. [7] demonstrated that scientific knowledge can be efficiently encoded as information-dense word embeddings without human labeling or supervision. The algorithm used to transform the text into word embeddings is an Artificial Neural Network called Word2Vec [8, 9], which uses the continuous-bag-of-words (CBOW) method. The algorithm learns the embedding by maximizing the ability of each word to be predicted from its set of context words using vector similarity. The output of Word2Vec is a 50-dimensional numerical vector for each word in the text corpus.

To continue with the experiment, a subsample of 40% of the pre-processed texts (both abstracts of academic writing and mission statements of humanitarian organizations) was fed as training data to a Word2Vec algorithm, using the knowledge acquired from previous training to create a domain-specific model, Word2VecDR (this is called transfer learning). After the Word2VecDR was trained, it was able to encode all texts in the dataset into a numerical representation: every word of a text was assigned a 50-dimensional numerical vector. The texts ranged from 15 to 5668 words, with an average of 824 words per text. Therefore, if an abstract contains 100 words, the resulting output of Word2VecDR is a list of 100 sub-lists with 50 elements each.
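A minimal gensim sketch of this two-stage training is given below; the corpus variables (pretrain_corpus, domain_corpus) and all hyperparameters except the 50-dimensional vector size are assumptions.

```python
from gensim.models import Word2Vec

# Stage 1: train a base model; sg=0 selects the CBOW architecture and
# vector_size=50 matches the embedding dimension used in the chapter.
model = Word2Vec(sentences=pretrain_corpus, vector_size=50, sg=0,
                 window=5, min_count=5, workers=4)

# Stage 2 (transfer learning): extend the vocabulary with the 40% domain
# subsample and continue training the same weights, yielding Word2VecDR.
model.build_vocab(domain_corpus, update=True)
model.train(domain_corpus, total_examples=len(domain_corpus),
            epochs=model.epochs)

def encode(tokens, model):
    """Encode a pre-processed text as a list of 50-dimensional word vectors."""
    return [model.wv[word] for word in tokens if word in model.wv]
```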

Mikolov et al. [9] observed that simple algebraic operations on word embeddings, e.g., vector(“King”) − vector(“Man”) + vector(“Woman”), result in a vector that is closest to vector(“Queen”), concluding that the resulting vector is content-related. Furthermore, researchers have applied statistical operations such as the mean or average to a list of word embeddings, with results that successfully captured the content of the text (examples can be found in [10, 11]). However, when calculating the mean value or adding each word vector, the resulting vector is an abstraction (reduction) of its content and hence loses information. To encapsulate as much information as possible from the list of numerical vectors, the present research proposes to use Higher-Order Statistics (HOS). In HOS, the mean (X̄) and standard deviation (s) correspond to the first- and second-order moments; one could calculate up to n-order moments. Skewness (sk_i) relates to the third-order moment of the data distribution and measures the direction of the tail in comparison to the normal distribution, where Y is the median:

\[
sk_i = \frac{\bar{X} - Y}{s} \tag{E1}
\]

If the resulting number is positive, the distribution is skewed to the right, with the tail pointing to the right side of the distribution; if it is negative, the tail is on the left side. Kurtosis (k_i) relates to the fourth-order moment and measures how heavy the tails of a distribution are [12], where N is the sample size:

\[
k_i = \frac{\sum_{i=1}^{N} \left( X_i - \bar{X} \right)^4}{N s^4} \tag{E2}
\]

By applying the first four moments of HOS to the data, each text is represented by a numerical vector of 200 dimensions, i.e., four sub-lists of 50 dimensions, one for each HOS moment (mean, standard deviation, skewness, and kurtosis). Encoding data with HOS has two advantages. First, compared with the embedding vectors of deep auto-encoders, the resulting HOS vectors are meaningful and directly interpretable [13]. Second, the computational time for clustering the text data decreases considerably, since the length of the numerical vector is reduced. Additionally, by using four moments, each resulting vector encapsulates more information than when using only one statistical value (the first or second moment).
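A compact sketch of this encoding is shown below; it uses SciPy’s moment-based skewness and kurtosis estimators as stand-ins for Eqs. (E1) and (E2).

```python
import numpy as np
from scipy.stats import kurtosis, skew

def hos_encode(word_vectors):
    """Collapse a (num_words, 50) list of embeddings into a single
    200-dimensional vector: mean, std, skewness, and kurtosis per dimension."""
    X = np.asarray(word_vectors)   # shape: (num_words, 50)
    return np.concatenate([
        X.mean(axis=0),            # first moment
        X.std(axis=0),             # second moment
        skew(X, axis=0),           # third moment
        kurtosis(X, axis=0),       # fourth moment
    ])                             # shape: (200,)
```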

2.4 Data representation and clustering

Clustering and representation techniques assist in exploring a data collection and identifying clusters of information that share similar properties [15]. For example, the following algorithms have been used to cluster text: Support Vector Machine (SVM) [16], k-means [17], Principal Component Analysis (PCA) [18], and the Kohonen Self-Organizing Map (SOM) [19]. A full review of different clustering algorithms can be found in [20]. As shown in the work of [19], the unsupervised ML algorithm SOM [20, 21] has proven to have excellent performance when clustering text data and reducing its dimensionality. Additionally, as presented in the work of [23]:

“SOM acts as a nonlinear data transformation in which data from a high-dimensional space are transformed into a low-dimensional space (usually a space of two or three dimensions), while the topology of the original high-dimensional space is preserved. SOM has the advantage of delivering two-dimensional maps that visualizes data clusters that reflect the topology of the original high-dimensional space.”

In the proposed experiment, the algorithm of choice for clustering is the SOM, which takes advantage of both clustering and dimensionality reduction [20, 21]. Topology preservation means that if two data points are similar in the high-dimensional space, they are necessarily close in the new low-dimensional space and hence are placed within the same cluster. This low-dimensional space, usually represented by a planar grid with a fixed number of points, is called a map. Each node of this map has specific coordinates (x_{i,1}, x_{i,2}) and an associated n-dimensional vector, or Best Matching Unit (BMU), such that similar data points in the high-dimensional space are given similar coordinates. Moreover, each node of the map represents the average of the n-dimensional original observed data that, after iteration, belongs to this node [13]. In short, the SOM helps navigate a dataset: it represents the many dimensions of a dataset in one or two dimensions, allowing a deeper understanding of that dataset.
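A minimal training sketch using the open-source MiniSom package is given below (the chapter does not name the SOM implementation it used). The 10×10 grid and the 1 million iterations follow the text; text_vectors, sigma, and the learning rate are assumptions.

```python
import numpy as np
from minisom import MiniSom

data = np.asarray(text_vectors)  # (num_texts, 200) HOS encodings, assumed

som = MiniSom(10, 10, input_len=200, sigma=1.5, learning_rate=0.5)
som.random_weights_init(data)                    # random initial weights
som.train_random(data, num_iteration=1_000_000)  # settle into stable clusters

# The BMU of a text is the grid node whose weight vector is closest
# to it in Euclidean distance.
bmu = som.winner(data[0])  # e.g., (3, 7)
```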

3. Results

At this point, the texts of the academic publications and the humanitarian mission statements are comparable, since both were encoded with the same method (Word2Vec followed by HOS). Therefore, they can be clustered using the SOM algorithm presented in the previous section. The first attempts to unify the narratives showed that the number of texts still creates confusion, as many studies and mission statements do not focus specifically on architectural practices in disaster response. Such a clustering takes a general approach, making it challenging to select studies or tools that answer questions about spatial or constructive problems. Therefore, we use the SOM as a first filter to narrow the focus to specific tasks (architectural practices). The SOM algorithm gives us an organized view of the data; that is to say, we can make a selection based on a specific approach, concentrating on the information relevant to the focus of this experiment.

The filtering was performed on both datasets, beginning with the mission statements of humanitarian organizations. For this purpose, the numerical vectors representing the texts from humanitarian organizations were fed as inputs to a 10×10 SOM grid. The algorithm started with an initial distribution of random weights and, over 1 million epochs, eventually settled into a map of stable zones or clusters. The output layer can be visualized as a smoothly changing spectrum where each SOM node has its coordinates and an associated n-dimensional vector or Best Matching Unit (BMU). For visualization purposes, a color is assigned to the weight value (n-dimensional vector) of each BMU and displayed together with a list of keywords. The keywords are the most common terms used in the texts clustered in each SOM node, and the size of each word represents the number of times it appears in that group of texts. Figure 3 shows the consistency of the clustering: nodes with similar keywords are positioned close to each other. This trained SOM grid can be considered a common ground for the mission statements of humanitarian organizations and will be called SOM HO.

Figure 3.

The SOM HO, a 10×10 SOM grid trained on the humanitarian organization data.

Although the SOM HO represents the data in an organized manner, it also contains humanitarian organizations that may not be relevant to this chapter’s focus. As our focus is on architectural practices for natural disaster response, we filter the humanitarian organizations’ data based on interests shared with the academic publications. When a new dataset is fed to a trained SOM, each data point measures its Euclidean distance to each BMU of the trained SOM; the closer the distance, the better the data point fits that node. When the academic literature data are fed into the trained SOM HO, they activate a specific number of cells. Hence, the data we take for creating our final common ground come only from the cells activated by the academic publication data. Figure 4 shows the activated cells of the SOM HO, where humanitarian organizations that share a common interest with the academic writing are found.
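Continuing the MiniSom sketch above, this cell-activation filter can be expressed in a few lines; all variable names (som_ho, academic_vectors, organizations, org_vectors) are assumptions.

```python
# Cells of the trained SOM HO activated by the academic-publication vectors
activated_cells = {som_ho.winner(v) for v in academic_vectors}

# Keep only the organizations whose BMU is among the activated cells
filtered_orgs = [org for org, v in zip(organizations, org_vectors)
                 if som_ho.winner(v) in activated_cells]
```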

Figure 4.

Activated cells of the SOM HO, where humanitarian organizations that share a common interest with the academic writing are found.

After collecting all the activated cells, 1082 humanitarian organizations out of the original list of 1930 were selected. Some of the selected humanitarian organizations are: The International NGO Safety Organization, Nansen International Office for Refugees, Peoplesafe, Rise Against Hunger, SeedChange, and Association of Assistance Solidarity Supportiveness of Refugees and Asylum Seekers.

A similar filtering process was performed on the academic writing data (initially 8364 abstracts). First, the encoded abstracts were fed into a new 2D SOM of 10×10, where each text was clustered based on the similarity of its content (the training procedure was the same as for the humanitarian organization mission statements), creating a SOM of academic publications, which we call SOM AP. The filtering constraint was defined by a sample of publications taken as exemplary texts, all of which concentrate on architectural practices for disaster response. When these selected publications were fed into the trained SOM AP, they activated cells containing texts with similar approaches. Out of the 8364 abstracts, 835 were selected. Figure 5 shows the SOM AP and the cells matching the selected publications.

Figure 5.

10×10 SOM of abstracts and the cells selected based on specific literature, resulting in a final selection of 835 abstracts.

The filtered dataset of humanitarian organizations (1082) was joined with the filtered dataset of academic publications (835), creating a new dataset of 1917 texts. These data were fed as input to a new 10×10 SOM grid that, after a million iterations, settled into a map of clustered texts, or what we call the final Common Ground (Figure 6). This Common Ground joins the two discourses on disaster response in an organized manner and serves as a basis for articulating informed decisions, which emerge from specific requirements and interests.

Figure 6.

Common Ground, a 10×10 SOM trained with the joined data from the filtered humanitarian organizations (1082) and the filtered academic writing from the field of disaster response (835).

4. Validation

To validate the proposed approach, we use international news (reports on natural disasters and their impact on communities), since after a natural disaster the first information often comes from news reports. It should be emphasized that any text can serve as a query, whether something very particular or something descriptive, as in the case of the news. For experimental purposes, the queries were extracted from news describing three natural disasters of the last 5 years.

The first was the 2016 magnitude 7.8 earthquake on the Ecuadorian coast:

“A magnitude 7.8 earthquake rocked Ecuador’s coast on April 16, 2016 — killing almost 700 people and leveling homes, schools, and infrastructure. More than 6,000 people were severely injured. The quake’s epicenter was offshore, about 17 miles from the town of Muisne in Manabí province and 100 miles northwest of Quito, the capital. After the quake, more than 700,000 people needed assistance. An estimated 35,000 houses were destroyed or badly damaged, leaving more than 100,000 people in need of shelter. Water, sanitation, and healthcare facilities were also destroyed.” [22]

The second was the 2019 magnitude 5.4 earthquake in Costa Rica:

“A magnitude 5.4 earthquake shook much of Costa Rica at 7:33 p.m., according to preliminary data from the National Seismological Network (RSN). The tremor had an epicenter near Arancibia, Puntarenas, which is located about 45 miles northwest of San José and its surrounding Greater Metropolitan Area. RSN indicates the quake was felt throughout the Central Valley, home to nearly three-quarters of Costa Rica’s population. There have not been any immediate reports of substantial damage. ‘According to preliminary data from the emergency committees, so far there is no report of damage after the perceived earthquake,’ said the National Emergency Commission (CNE) in a post. The National Seismological Network has already reported at least one aftershock, which occurred at 7:38 p.m. and had a similar epicenter.” [23]

The third was the 2019 magnitude 6.5 earthquake in Indonesia:

“A 6.5-magnitude earthquake struck the remote Maluku Islands in eastern Indonesia on Thursday morning, killing at least 20 people. Indonesian officials said the quake, which was detected at 8:46 a.m. local time, did not present the threat of a tsunami. But it was classified as a ‘strong’ earthquake in Ambon, a city of more than 300,000 people and the capital of Maluku Province. The United States Geological Survey said the epicenter was about 23 miles northeast of Ambon. At least 20 people were killed in the quake, the authorities said, including a man who was killed when a building partially collapsed at an Islamic university in Ambon, according to Reuters. More than 100 people were reported injured in the quake, and the authorities said about 2,000 had been displaced from their homes.” [24]

All queries were pre-processed and fed into the trained Word2VecDR model to extract their word embeddings. The output of the Word2VecDR model was encoded with HOS (see Section 2.3), yielding a final 200-dimensional vector for each query. Each query vector was then fed into the final Common Ground, the 10×10 SOM, finding the BMU with the closest Euclidean distance. Figure 7 displays the closest BMU for each query vector. For the first case study, the keywords in the selected cell are: community, earthquake, risk, and safety. For the second case study, they are: disaster, community, study, and management. For the third, they are: energy, disaster, policy, and study.
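The validation step can be sketched end to end by reusing the helpers from the earlier sketches; texts_by_node, a mapping from grid coordinates to the texts clustered in that node, is a hypothetical name.

```python
# preprocess(), encode(), hos_encode(), model, and common_ground_som come
# from the sketches in Sections 2.3 and 2.4; texts_by_node is a hypothetical
# dict mapping a BMU (i, j) to the texts clustered in that node.
tokens = preprocess(news_text)
query_vector = hos_encode(encode(tokens, model))

bmu = common_ground_som.winner(query_vector)  # closest node, e.g., (4, 2)
suggestions = texts_by_node[bmu]              # candidate organizations/studies
```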

Figure 7.

In color, the BMU with the closest Euclidean distance to each query vector.

5. Discussion

As explained in the Introduction, this research focuses on operations for disaster response, where information has to be concise and arrive on time for decision-makers at the operational level to make an informed decision. The selected case studies are only a subsample of the different scenarios investigated in this work. The final output for each query was four organizations and four studies (Figure 8). These texts belong to the set of texts grouped in the BMU assigned to each query.

Figure 8.

A list of organizations and tools from the clustered texts belonging to the BMU closest to each query vector.

A correlation with the specific interest can be observed when analyzing the final output. Consider the first case study: the keywords extracted from the first query (the 2016 magnitude 7.8 earthquake on the Ecuadorian coast) are earthquake, people, severely, assistance, shelter, healthcare, and facility, while those assigned to the BMU were community, earthquake, risk, and safety. The considerable overlap demonstrates the consistency of the clustering. Within the selected cell were various humanitarian organizations and academic publications from different research fields, such as health sciences, engineering, and architecture, clustered together because they share similar keywords and concerns. This pattern will always occur: an output will always suggest a list of possible options from different fields of study. Therefore, the final selection, which can be called human supervision, ensures the success of the articulation. The decision-maker decides what kind of information is relevant for decision-making based on particular concerns. In our case, as we focus mainly on architectural practices, the tools and organizations shown in the final outputs (Figure 8) are the ones with a constructive and spatial focus.

The present experiment described a methodology that joins two discourses from the field of disaster response to create a Common Ground, which then provides a basis for selecting and prioritizing information regarding a specific interest, articulating three final outputs. When working with a data-driven approach, there are often questions about whether the accuracy of the results can be trusted. To address this, the research proposed a methodology involving an interplay of human and artificial intelligence, where accuracy is ensured by a series of user-dependent filters that secure the specificity of the result.

6. Conclusion and future work

Two main observations emerged from this experiment. First, the lists of humanitarian organizations indexed on the web contain information that cannot be compared; in other words, most organizations registered in one source are not present in another. To settle on a reliable source, one must navigate and filter through several of them to obtain a representative number. Second, approaches from different disciplines share similar keywords, e.g., academic writing from the field of health and from the field of building safety assessments. Though both are extremely necessary, their approaches are entirely different. Therefore, even if AI predicts similarities among them, humans must be present to make the final decision and selection.

Additionally, it was found that there is a lack of research on how to integrate AI into a workflow for large-scale disaster response, especially in countries with scarce resources. Therefore, future work should apply the proposed or similar methodologies to an ongoing disaster case study to validate the speed and relevance of the results. It would also be interesting for researchers to add new discourses to those proposed in this experiment, e.g., social media, as these would bring a new stakeholder perspective to disaster response. Researchers could also examine different ways of encoding text data into numerical vectors; for example, instead of word embeddings, the frequency of words used over time (as in the Google Books Ngram Viewer) could be used and the results compared with those of the present research.

To conclude, AI is a problem-solving tool tailored to specific problems, and in a natural disaster scenario this type of intelligence can be highly beneficial. Disasters generate massive amounts of data that require considerable computational power to process, and there are usually few people available to do so. By joining the strengths of human cognition with the strengths of AI computing, this experiment illustrates a method for creating a Common Ground that depicts collaboration among humanitarian organizations and researchers around the world, aiding an informed response in the aftermath of a natural disaster.

Conflict of interest

The author declares no conflict of interest.

Notes/thanks/other declarations

To my family, Christian, Antonio, Clarita, Carlos, Carlos Jr. Luis.

References

  1. UNDAC. UN Disaster Assessment and Coordination (UNDAC) Field Handbook. 7th ed. ReliefWeb; 2018. Available from: https://reliefweb.int/report/world/un-disaster-assessment-and-coordination-undac-field-handbook-7th-edition-2018
  2. Imran M, Castillo C, Lucas J, Meier P, Vieweg S. AIDR: Artificial intelligence for disaster response. In: Proceedings of the 23rd International Conference on World Wide Web. ACM; 2014. pp. 159-162
  3. Zhang D, Zhang Y, Li Q, Plummer T, Wang D. CrowdLearn: A crowd-AI hybrid system for deep learning-based damage assessment applications. In: 2019 IEEE 39th International Conference on Distributed Computing Systems (ICDCS). IEEE; 2019. pp. 1221-1232
  4. Shamoug A, Cranefield S, Dick G. Information Retrieval for Humanitarian Crises via a Semantically Classified Word Embedding. 2018
  5. Fernandez-Luque L, Imran M. Humanitarian health computing using artificial intelligence and social media: A narrative literature review. International Journal of Medical Informatics. 2018;114:136-142
  6. Müller-Stewens G, Dinh T, Hartmann B, Eppler MJ, Bünzli F. The Professionalization of Humanitarian Organizations: The Art of Balancing Multiple Stakeholder Interests at the ICRC. Springer International Publishing; 2019. DOI: 10.1007/978-3-030-03248-7
  7. Tshitoyan V, Dagdelen J, Weston L, Dunn A, Rong Z, Kononova O, et al. Unsupervised word embeddings capture latent knowledge from materials science literature. Nature. 2019;571(7763):95-98
  8. Mikolov T, Chen K, Corrado G, Dean J. Efficient estimation of word representations in vector space. 2013. arXiv preprint arXiv:1301.3781
  9. Mikolov T, Sutskever I, Chen K, Corrado GS, Dean J. Distributed representations of words and phrases and their compositionality. In: Advances in Neural Information Processing Systems. 2013. pp. 3111-3119
  10. Li Q, Shah S, Liu X, Nourbakhsh A. Data sets: Word embeddings learned from tweets and general data. In: Proceedings of the 11th International Conference on Web and Social Media, ICWSM 2017. AAAI Press; 2017. pp. 428-436
  11. Socher R, Perelygin A, Wu J, Chuang J, Manning C, Ng A, et al. Recursive deep models for semantic compositionality over a sentiment treebank. In: Proceedings of EMNLP. 2013
  12. DeCarlo LT. On the meaning and use of kurtosis. Psychological Methods. 1997;2(3):292
  13. Saldana Ochoa K, Ohlbrock PO, D’Acunto P, Moosavi V. Beyond typologies, beyond optimization: Exploring novel structural forms at the interface of human and machine intelligence. International Journal of Architectural Computing. 2021;19(3):466-490
  14. Hu X, Liu H. Text analytics in social media. In: Aggarwal C, Zhai C, editors. Mining Text Data. Boston, MA: Springer; 2012
  15. Ragini JR, Anand PMR, Bhaskar V. Big data analytics for disaster response and recovery through sentiment analysis. International Journal of Information Management. 2018;42:13-24. DOI: 10.1016/j.ijinfomgt.2018.05.004
  16. Rytsarev IA, Kupriyanov AV, Kirsh DV, Liseckiy KS. Clustering of social media content with the use of BigData technology. Journal of Physics: Conference Series. 2018;1096(1):012085
  17. Mir I, Zaheer A. Verification of social impact theory claims in social media context. The Journal of Internet Banking and Commerce. 2012;17(1):1-15
  18. Pohl D, Bouchachia A, Hellwagner H. Automatic sub-event detection in emergency management using social media. In: Proceedings of the 21st International Conference on World Wide Web. ACM; 2012. pp. 683-686
  19. Lin RTK, Liang-Te CJ, Dai H-J, Day M-Y, Tsai RT-H, et al. Biological question answering with syntactic and semantic feature matching and an improved mean reciprocal ranking measurement. In: Proceedings of the 2008 IEEE International Conference on Information Reuse and Integration. 2008. pp. 184-189
  20. Kohonen T. Self-organized formation of topologically correct feature maps. Biological Cybernetics. 1982;43(1):59-69
  21. Moosavi V. Pre-Specific Modeling [doctoral dissertation]. Zürich: Eidgenössische Technische Hochschule (ETH) Zürich, Nr. 22683; 2015. DOI: 10.3929/ethz-a-010544366
  22. 2016 Ecuador earthquake facts. World Vision. Available from: https://www.worldvision.org/disaster-relief-news-stories/2016-ecuador-earthquake-facts [Accessed: 08/22/2022]
  23. Central Valley of Costa Rica shakes with magnitude 5.4 quake. The Tico Times; 2019. Available from: https://ticotimes.net/2019/11/24/central-valley-of-costa-rica-shakes-with-magnitude-5-4-quake [Accessed: 08/22/2022]
  24. Strong earthquake strikes Indonesia, killing at least 20. The New York Times; 2019. Available from: https://www.nytimes.com/2019/09/25/world/asia/indonesia-earthquake-ambon.html [Accessed: 08/22/2022]
