Open access peer-reviewed chapter

A Theoretical Concept to Increase the Trustworthiness of Online and Offline Debates with Real-Time AI Speech Analytics

Written By

Kevin Koidl

Submitted: 13 March 2021 Reviewed: 18 May 2021 Published: 01 July 2021

DOI: 10.5772/intechopen.98442

From the Edited Volume

Computer-Mediated Communication

Edited by Indrakshi Dey


Abstract

Debates are an essential democratic institution endangered by the rise of Social Media. The advent of Fake News, often referred to as the ‘crisis of trust’, has led to a substantial increase in debates that blend online and offline. It can be argued that blended approaches are not directly linked to increasing trustworthiness in the debate. To overcome this trust crisis and increase the reliability of debates, we introduce the HELIOSPHERE concept, which uses technological advances such as Artificial Intelligence and Augmented Reality to create a fairer, more inclusive and more transparent debate. The critical components for inclusiveness are Augmented Reality and 3D camera technology, which hybridise the online and offline debating space and ensure that anyone who cannot be present can engage with the debate. For transparency and fairness, a key indicator of trust, an Artificial Intelligence dashboard is introduced to analyse and visualise speaking time, speaker gender, topic relatedness, bias and sentiment in real time. This work presents the overall theoretical concept, focusing on the academic and technical foundations that support reliable communication within debates.

Keywords

  • Real-Time AI
  • Speech to Text
  • NLP
  • Analytics
  • communication
  • media
  • physical spaces

1. Introduction

Modern society depends on open and fair debates to shape democracy. For a debate to be successful, it is essential that different viewpoints can be addressed and discussed. This requires fairness and trust. Traditional locations for debates are Town Halls, TV Debates and Universities. Debates guide public policy and serve to increase the legitimacy of measures since they originate from citizens or are supported by citizen groups [1]. Debates often consist of a group of citizens provided with a large amount of information, who then deliberate on public policy directions, intending to reach consensus on specific recommendations [2]. Naturally, citizen groups have been identified as a promising effort to promote deliberative democracy [3]. Research predominantly focuses on such debates and how the participants are transformed through the experience.

“[…] in the long term, deliberative civic engagement efforts could transform not only their participants but also the larger public. Those participating in, engaged with, or captivated by such efforts should report stable (or rising) levels of public trust and signs of reduced civic neglect” [4, 5, 6, 7].

In other words, public debates can be considered a remedy to political distrust. Studies have focused on how such debates can promote social learning [8] and change participants’ preferences [9]. Such debates are often seen as the most advanced method to institutionalise deliberative democracy [10].

Currently, a general agreement has been reached that small-circle debates, also defined as mini-publics, are one component of deliberative democracy [11, 12, 13]. The possibility of utilising emerging information and communication technologies for new ways of citizen participation has also been recognised, since network technologies allow for ease of access to civic involvement in politics [14, 15]. Additional benefits have been identified in terms of democratic discussions among people [16, 17], such as eliminating physical and social barriers that have a restrictive impact on offline mini-publics [18]. Even Supreme Court Justice Anthony Kennedy pointed out that discussions nowadays do not happen in streets and parks and instead happen via electronic media; he therefore reiterated that the public’s ability to participate in discussions would change with changes in communication technologies [19]. Thus, one of the main challenges is how to ‘translate’ the traditional public forum into a more modern technological environment while preserving the most important ideals of public forums, such as ensuring that speakers have access to a broad audience, that speaking time is equal, and that the public has a shared exposure to diverse views and opinions.

Recent events surrounding the global COVID-19 pandemic have shown that, in the presence of a worldwide mass lockdown of society for a considerable period, a scalable online deliberative platform becomes increasingly critical for the preservation of democracy and for decision making that affects both local and global communities and interests. Yet, most research and initiatives on online deliberative publics do not contemplate the effects new media concepts, such as Social Media and online forums, have on how and where debates are conducted. It can be argued that both online and offline deliberation can lead to further polarisation [19]. Specifically, with the advent of Social Media platforms, the overall debating landscape has become a complex global plethora of constantly changing media interactions affecting the individual citizen. New, user-driven media phenomena have emerged, known as Filter Bubbles [20] and Echo Chambers [21]. Both phenomena create a distorted view of the overall reality in which the debate is held. This became very clear during the last US elections, in which the primarily east-coast-based liberal press treated Hillary Clinton as a certain winner, hence creating an Echo Chamber. The debate was biased entirely towards the opinion of the liberal news outlets, creating a distorted view of the overall US picture [22]. This phenomenon is propelled by Filter Bubbles, in which content of interest is prioritised, filtering out friends who hold a different opinion [23]. The overall challenge is a constant misunderstanding or artificial bias within online spaces that facilitate debate. On the flip side, however, it is not easy to scale a physical discussion and organise it in a transparent, inclusive and fair manner. Regarding fairness, the concept of bias plays a vital role and is often misunderstood. Biases within debates are inherently necessary because they represent the opinion or value system of the debating parties. However, it is essential for a debate that these biases are known to everyone.

A further challenge in modern digital or physical debates is Fake News. This topic played a significant role in the last US election, most prominently through what has become known as the Cambridge Analytica scandal. The core of Fake News goes beyond simply posting or circulating false news; the danger lies more in the nuance of its influence. In the form of ads, news articles can subtly influence members of society to vote for a different party [24]. Therefore, it can be argued that Fake News endangers an open and honest democratic process due to the lack of reflection and debate around the opinions of the members of a democratic society.

Furthermore, the emergence of Deep Fakes, which use high-end AI technology to create falsified videos that are close to impossible for a human to identify as false, will lead to even more distrust in media in general and further weaken the public’s trust in the modern media landscape [25]. Similarly, behavioural and attention economics in the digital context shape media content, favouring shorter and addictive content rather than deep and reflective content that requires more time.

This publication introduces the HELIOSPHERE concept: a participant-focused, fair, sustainable and technologically advanced debating concept intended to empower a transparent, inclusive and honest debate. It addresses inclusiveness by facilitating a hybridisation of the online and offline, the digital and physical, the real and virtual. HELIOSPHERE therefore forms a conceptual and theoretical base for modern debates that are empowered by modern media technology without weakening the core of the discussion: honest, respectful and trustworthy communication between citizens. At its core, HELIOSPHERE empowers online and offline debates with sophisticated Machine Learning analytics, resulting in a media value chain that supports the moderation of a discussion to ensure the debate is transparent, inclusive and fair. A pertinent point in the current environment is the ability of HELIOSPHERE to remain functional and help citizens during massive societal lockdowns, due to its online nature and its ability to include people even in the most stringent social-distancing environments.


2. Theoretical concept

The HELIOSPHERE is an inclusive, transparent and fair debating platform that addresses the lack of trust in public, online and offline debates in order to support the democratic process of modern society. It implements an easy-to-apply solution that can be used in any public setting, whether entirely online or as a hybrid of offline and online, with minimal effort. The main component regarding trust is the AI-supported real-time debate analytics solution, which supports both the moderation and the offline/online audience in identifying and adjusting to elements of debates that create bias, manipulation, monopolisation and so on. Participants can share, design and validate the debate with relevant content. HELIOSPHERE utilises Machine Learning models trained on datasets collected from previously held debates and speeches, which enable the debate to become more transparent and fairer, as well as data gathered from media, political and other resources (see the Data Engine section below). The platform is not limited to a particular language or border. It includes multilingual real-time modules and addresses Cross-Border Content Rights, Data Privacy embedded from the start, and Freedom of Speech, with the aim of understanding how meaningful debates can increase trust in the political and democratic communication process of modern society.


To increase transparency, inclusiveness and fairness during the debate, the HELIOSPHERE visualisation focuses on the following analytics results:

  • Information on the number and duration of male and female contributions may help the moderator to find a balance in this respect.

  • Statements can be weighted according to their overall popularity, based on the results of the analytics carried out before the debate - not to endorse these statements or give the impression that they are more plausible, but to highlight them and give the speaker the chance to react to this fact.

  • Most importantly, the fact-checker provides an analysis of the plausibility of any statement, so that the moderator or any participant in the debate can pick up a line and bring it up again, preventing populists from winning a debate based on good rhetoric alone.

Sensible guidelines support moderators in making fair use of this information, ensuring that it increases fairness and reason throughout the debate rather than making it easier for any speaker to win an argument through clever manipulation. The HELIOSPHERE system continues to learn and monitor a topic’s coverage and identifies when the time has come to re-open the debate or hold a new debate on the subject based on significant recent developments. In the following sections, we describe the platform architecture and its components.


3. The HELIOSPHERE architecture

The HELIOSPHERE Engine Architecture is developed in a modular manner to support transparent and inclusive debates [26]. The architecture has four main goals: data collection, machine-model training, deployment of the tools, and visualisation during and after debates. The platform consists of three main parts: the Data Engine, the Machine Learning Engine and the Customizable Visualisation Engine.

3.1 The HELIOSPHERE Data Engine

The HELIOSPHERE Data Engine is responsible for storing and pre-processing all the collected data, including data collected from the debates themselves. During a debate, an automatic speech-to-text module transforms the speech into text. Additional data is collected from other sources, including related initiatives, historical events, business, academic and political entities, published speeches (video, audio, transcripts), documents from governmental and non-governmental institutions (including the UN, UNESCO, the EU Council, the EU Parliament, national legislative bodies, the WTO, the World Bank and the IMF) and data published by NGOs. Data collected from publicly available TV and print media content and publicly available social media postings (Twitter, Facebook, Reddit, YouTube, Steemit or any other relevant or future social media platform) related to the debate topics is pre-processed and stored within the Data Engine.

Since the collected data is heterogeneous, the raw data must be parsed, pre-processed and standardised to ensure reusability and compatibility. The raw data is continuously pre-processed, prepared and annotated before being included in the data storage engine. As such, the technological solution necessitates a distributed environment, such as a Hadoop1 system, to provide real-time queries and interactive aggregations even with tens of thousands of data points. The Data Engine is structured to provide fast (1–2 second) query access to the data requested by the ML and visualisation engines, by third parties through the APIs, or by other services. Furthermore, specific blockchain smart contracts need to be included in the Data Engine to guarantee data privacy.
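To make the standardisation step concrete, the following is a minimal sketch (not part of the published system) of how a heterogeneous raw document might be normalised into a single record format before storage; the `StandardRecord` fields and the `normalise` helper are assumptions introduced for illustration only.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json
import re

@dataclass
class StandardRecord:
    """Unified record format assumed for the Data Engine (illustrative only)."""
    source: str          # e.g. "debate", "twitter", "eu_parliament"
    text: str            # cleaned body text
    language: str        # ISO 639-1 code
    collected_at: str    # ISO 8601 timestamp
    topic: str           # debate topic the record is associated with

def normalise(raw_text: str, source: str, language: str, topic: str) -> StandardRecord:
    # Strip markup remnants and collapse whitespace so downstream NLP sees clean text.
    cleaned = re.sub(r"<[^>]+>", " ", raw_text)
    cleaned = re.sub(r"\s+", " ", cleaned).strip()
    return StandardRecord(
        source=source,
        text=cleaned,
        language=language,
        collected_at=datetime.now(timezone.utc).isoformat(),
        topic=topic,
    )

record = normalise("<p>Air  quality in cities ...</p>", "print_media", "en", "clearing-the-air")
print(json.dumps(asdict(record), indent=2))
```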

To mitigate and recognise fake information, deep fakes and illegal content, the engine utilises blockchain technology to provide traceability, transparency and decentralisation. As such, the blockchain implementation offers reliable support for verifying both the content and its source. Different actors, i.e. the people involved in the debate, can access a public blockchain where data is tagged; they can, in turn, form a ‘Debunker Community’ and give opinions on the content during the debate. These opinions may be registered in the tamper-free, publicly accessible ledger. However, complex queries on the blockchain’s data cannot be directly supported by the blockchain itself due to performance and scalability issues. HELIOSPHERE therefore provides an interface between the blockchain and the Data Engine so that the Data Engine can retrieve the data on the blockchain and support complex data analysis efficiently. The Data Engine also stores the results of complex aggregation queries in the blockchain. This ensures the results of the analysis are available to the actors of the debate and remain immutable.
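The chapter does not prescribe a particular ledger implementation, so the sketch below only illustrates the general idea of making aggregation results tamper-evident: the result is hashed deterministically and the fingerprint is handed to a ledger writer. The `write_to_ledger` function is a hypothetical stand-in for whatever blockchain client and smart contract the platform would actually use.

```python
import hashlib
import json
from datetime import datetime, timezone

def fingerprint(result: dict) -> str:
    """Deterministic SHA-256 fingerprint of an aggregation result."""
    canonical = json.dumps(result, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def write_to_ledger(entry: dict) -> None:
    # Hypothetical stand-in: a real deployment would submit this entry to a
    # blockchain client / smart contract instead of printing it.
    print("ledger entry:", entry)

aggregation = {"debate_id": "clearing-the-air", "avg_sentiment": 0.12, "n_statements": 418}
write_to_ledger({
    "debate_id": aggregation["debate_id"],
    "result_hash": fingerprint(aggregation),
    "anchored_at": datetime.now(timezone.utc).isoformat(),
})
```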

3.2 The HELIOSPHERE Machine Learning Engine

The HELIOSPHERE Machine Learning Engine is responsible for providing the AI models used by the various components. The deployment of algorithms/models relies on three main parts: (1) data queried from the Data Engine, which is needed for the training and testing phases, (2) the neural and ML models that are candidates for each component, and finally (3) the code required to integrate everything. The engine functions iteratively: a neural model is proposed and trained on the available data, all suitable candidate models are compared and evaluated, and this informs the selection of the most suitable one for the task at hand. The selected model is then deployed for the next debate or innovation cycle.
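As a simplified illustration of this compare-and-select loop, the sketch below evaluates two candidate classifiers with cross-validation and keeps the better one, assuming scikit-learn is installed; the toy texts stand in for data queried from the Data Engine, and the real HELIOSPHERE models would be neural and task-specific.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# Toy labelled statements (stand-ins for data queried from the Data Engine).
texts = ["air pollution harms health", "clean air benefits cities",
         "data bias affects women", "gender gaps in design persist",
         "traffic emissions raise pollution", "medical trials exclude women"] * 5
labels = [0, 0, 1, 1, 0, 1] * 5

candidates = {
    "logreg": make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000)),
    "naive_bayes": make_pipeline(TfidfVectorizer(), MultinomialNB()),
}

# Train and evaluate every candidate, then keep the best-scoring one for deployment.
scores = {name: cross_val_score(model, texts, labels, cv=3).mean()
          for name, model in candidates.items()}
best = max(scores, key=scores.get)
print(scores, "-> deploy:", best)
```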

The engine supports both tensorflow2 and pytorch3, allowing further enrichment during model building (the code phase). The models can be accessed through internal API calls or the APIs of the partners. Based on the models and structure, several main components will be available, for instance:

The Speech-to-Text component is a real-time component used during the debate as an automatic tool for closed captioning and for improving the speech-to-text output in case errors occur during the live transcription. It separates the audio stream into segments of a predefined length, with a buffer option for uninterrupted service. For each segment, denoising and feature extraction are performed (the pre-processing phase), feeding the acoustic model and the language model. A speaker diarisation tool is used to discover different speakers and enable the segmentation of the incoming audio stream into individual speaker profiles. This allows a normalisation of the predictive models for each speaker. Since debates are often situated in noisy environments, the voice frequencies are separated from background sounds before being submitted to the speech-to-text engine. The speech-to-text conversion distinguishes between different speakers and currently disregards background music, fast or garbled speech, and interruptions (such as applause, crowd cheering, or other speakers butting in). The final output is a textual format saved into the Data Engine module with the required annotations for each debate and each participant. The output is also available for visualisation on the dashboard.
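A minimal sketch of the segmentation step, assuming a mono 16 kHz stream held in a NumPy array; the segment and buffer lengths are illustrative, and `transcribe_segment` is a hypothetical placeholder for the denoising, feature extraction and acoustic/language models described above.

```python
import numpy as np

SAMPLE_RATE = 16_000        # Hz, typical for speech models
SEGMENT_SEC = 5.0           # predefined segment length
BUFFER_SEC = 0.5            # overlap buffer so words at boundaries are not cut off

def split_stream(samples: np.ndarray):
    """Yield fixed-length segments with a trailing overlap buffer."""
    step = int(SEGMENT_SEC * SAMPLE_RATE)
    buffer = int(BUFFER_SEC * SAMPLE_RATE)
    for start in range(0, len(samples), step):
        yield samples[start:start + step + buffer]

def transcribe_segment(segment: np.ndarray) -> str:
    # Hypothetical placeholder: a real deployment would run denoising,
    # feature extraction and the acoustic/language models here.
    return f"<{len(segment) / SAMPLE_RATE:.1f}s of audio>"

stream = np.zeros(int(12.3 * SAMPLE_RATE), dtype=np.float32)  # 12.3 s of silence as dummy input
for segment in split_stream(stream):
    print(transcribe_segment(segment))
```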

The Language Model specifies all word combinations that form a semantic meaning and their probability of occurrence. The Dictionary is required to integrate phonemes and transcriptions of different pronunciations for a word; it is characterised by the level of granularity of the transcription into phonemes.
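To make these terms concrete, here is a toy sketch (not the chapter's implementation) of a bigram language model estimated by counting, together with a small pronunciation dictionary; the phoneme entries are roughly ARPAbet-style and purely illustrative.

```python
from collections import Counter, defaultdict

corpus = "clean air matters clean air policy matters".split()

# Language model: bigram counts turned into conditional probabilities P(next | current).
bigrams = Counter(zip(corpus, corpus[1:]))
unigrams = Counter(corpus)
prob = defaultdict(dict)
for (w1, w2), count in bigrams.items():
    prob[w1][w2] = count / unigrams[w1]
print(prob["clean"])   # {'air': 1.0}

# Dictionary: words mapped to one or more phoneme transcriptions (illustrative entries).
pronunciations = {
    "air": [["EH", "R"]],
    "data": [["D", "EY", "T", "AH"], ["D", "AE", "T", "AH"]],  # two accepted pronunciations
}
print(pronunciations["data"])
```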

Speech-to-Speech translation is implemented as a hybrid speech-to-speech system for three main reasons:

  1. There is not a sufficient amount of parallel audio data to allow researchers and developers to train efficient end-to-end speech translation systems. Decomposing the speech-to-speech translation task into smaller tasks takes advantage of the lower training-data requirements of each of the underlying functions compared to the end-to-end model.

  2. Exploiting different components in a distributed fashion is computationally more efficient at training time, allows for better controllability and is easier to upgrade.

  3. A composite system can share components from other subsystems of the HELIOSPHERE ecosystem.

The ASR and Synthesis components are shared with other subsystems of the HELIOSPHERE ecosystem. As such, we focus on developing the MT system and the communication protocols between the different components to ensure a coherent speech-to-speech MT component. To allow an avatar to be synchronised with the text, intermediate post-processing generates a set of visemes and timecodes based on the phonemes of the translated text. We consider this post-processing part of the MT component.
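The composition of the hybrid pipeline can be sketched as follows, with each stage as a hypothetical placeholder function; in the platform these would be separate shared services (ASR, MT, viseme post-processing, synthesis) rather than local calls.

```python
# Hypothetical composition of the hybrid speech-to-speech pipeline described above:
# ASR -> MT -> post-processing (visemes/timecodes) -> synthesis.

def asr(audio_chunk: bytes, source_lang: str) -> str:
    return "la qualité de l'air est importante"        # dummy transcript

def translate(text: str, source_lang: str, target_lang: str) -> str:
    return "air quality is important"                   # dummy translation

def visemes_and_timecodes(text: str) -> list:
    # Derived from the phonemes of the translated text; dummy output here.
    return [("AI", 0.00), ("R", 0.12), ("K", 0.25)]

def synthesise(text: str, target_lang: str) -> bytes:
    return b"<synthesised audio>"

def speech_to_speech(audio_chunk: bytes, source_lang: str, target_lang: str):
    text = asr(audio_chunk, source_lang)
    translated = translate(text, source_lang, target_lang)
    return synthesise(translated, target_lang), visemes_and_timecodes(translated)

audio, visemes = speech_to_speech(b"...", "fr", "en")
print(audio, visemes)
```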

The MT component has two objectives: (i) to provide inclusiveness via translation for users that conduct the debates in different languages and (ii) to provide inclusiveness via translation of the debates into English to generate content in the correct language and format for the analytics component. We apply three different MT systems to handle speech (in the form of audio input) and text: (i) a text-to-text bilingual MT to translate from and to English; (ii) a text-to-text multilingual MT that encapsulates multiple languages, including English, aiming to provide translation between language pairs for which bilingual parallel data is not available and (iii) a multimodal, speech-text-to-text translation system that exploits both speech and text to improve the text translation.

HELIOSPHERE exploits neural MT approaches using open and free software, such as OpenNMT4 and Marian5, which provide speech-to-text and multi-source translation. The goal is to improve the efficiency of our models and the architecture of our system to make it suitable for an HPC ecosystem. The third type of MT system mentioned above conducts a second-stage translation, similar to automatic post-editing systems. It uses two types of input -- speech (user-generated audio) and text (the result of ASR or the first-stage translation) -- and produces an improved version of the initial translation. Following positive examples from domain-adapted MT, gender-aware MT and others, we will develop a context-aware MT conditioned on the debate’s topic. HELIOSPHERE provides additional context information regarding the subject and the speakers that can help the translation system generate better translations. In this way, we ensure a coherent translation and reduce biases. The MT component has a distributed architecture. It operates in real-time and adapts to traffic through a series of scaling up/down policies that maintain the required number of resources for optimal performance. It is accessed via a set of API calls that allow human users and other components of the HELIOSPHERE ecosystem to interact with the MT component efficiently. This reduces the effort required to connect the MT component to the other components of the HELIOSPHERE ecosystem. We envisage a request-handling fleet that listens for and stores MT requests in a queue; another system consumes requests from the queue and invokes the requested action; once the action is completed, a response is sent directly to the endpoint provided with the initial request.
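A minimal single-process sketch of the request-handling pattern described above (fleet enqueues, consumer invokes the action, response goes to the provided endpoint), using only Python's standard library; a real deployment would rely on a distributed queue and autoscaling policies.

```python
import queue
import threading

requests_q = queue.Queue()

def handle_request(text: str, source: str, target: str, callback):
    """Front-end fleet: accept an MT request and enqueue it."""
    requests_q.put({"text": text, "source": source, "target": target, "callback": callback})

def worker():
    """Consumer: pull requests from the queue, invoke the MT action, reply to the endpoint."""
    while True:
        job = requests_q.get()
        if job is None:                      # sentinel used to stop the worker
            break
        translated = f"[{job['source']}->{job['target']}] {job['text']}"  # placeholder MT call
        job["callback"](translated)          # respond directly to the endpoint provided
        requests_q.task_done()

t = threading.Thread(target=worker, daemon=True)
t.start()
handle_request("la calidad del aire", "es", "en", callback=print)
requests_q.join()
requests_q.put(None)
```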

Other components interact with the MT pipeline via internal API calls. The viseme and timecode post-processing, like any post-processing, is invoked if necessary and is assumed to be part of the MT action.

The Natural Language Processing component extracts features including tokenization, word segmentation, Part-of-Speech (POS) tagging, parsing, named entity recognition, n-gram language models, emotion and sentiment analysis, text/debate summarisation, structural relations modelled using semantic compositionality, K-means clustering, Affinity Propagation, Latent Dirichlet Allocation, and event analysis. Established toolkits such as nltk6, gensim7, SpaCy8, pattern9 and others will be used.
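As a small illustration of this kind of feature extraction, the sketch below uses spaCy to obtain tokens, POS tags, named entities and noun chunks from a single sentence; it assumes the `en_core_web_sm` model has been installed, and the example sentence is invented.

```python
import spacy

# Assumes the small English model has been installed:
#   python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

doc = nlp("The EU Parliament debated air pollution in Dublin last Tuesday.")

tokens = [t.text for t in doc]                       # tokenization
pos_tags = [(t.text, t.pos_) for t in doc]           # part-of-speech tagging
entities = [(e.text, e.label_) for e in doc.ents]    # named entity recognition
noun_chunks = [c.text for c in doc.noun_chunks]      # shallow parsing

print(entities)       # e.g. [('Dublin', 'GPE'), ('last Tuesday', 'DATE')]
print(noun_chunks)
```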

Additional NLP endpoints are specifically targeted towards the real-time analysis of live discussion streams applicable in the HELIOSPHERE platform. These include:

  • a pipeline for unsupervised training of domain-dependent, aspect-based sentiment analysis classifiers: this allows topic-specific sentiment analyzers to be easily pre-trained in advance of a HELIOSPHERE event. During an event, these classifiers extract and quantify observed opinion-aspect pairs relevant to the topic of the discussion in real-time.

  • stance detection identifies and tracks which side of the argument actors in the discussion are on. This not only allows for the visualisation of (possibly shifting trends in) the stance of the participants but also serves to pinpoint bias in the discussion, for example, when certain sides of the argument are given a disproportionate amount of time during the debate (e.g. majority vs. minority voices).

  • level-of-disagreement detection: Internet pioneer and essayist Paul Graham identified seven types of disagreement that are most often used in online arguments, ranging from name-calling (the lowest level) to refuting the central point (the highest); a heuristic sketch of such a classifier follows this list.
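The following is a crude, keyword-based sketch of such a level-of-disagreement heuristic, intended only to illustrate the idea; the cue patterns are invented, and a production classifier would be model-based and trained on annotated data.

```python
import re

# Paul Graham's hierarchy of disagreement, lowest to highest.
LEVELS = ["name-calling", "ad hominem", "responding to tone", "contradiction",
          "counterargument", "refutation", "refuting the central point"]

# Crude keyword cues for the lowest levels only (illustrative heuristic,
# not the classifier the platform would actually train).
CUES = {
    0: re.compile(r"\b(idiot|stupid|fool)\b", re.I),
    1: re.compile(r"\byou people\b|\bof course (he|she|they) would say\b", re.I),
    2: re.compile(r"\b(tone|how dare you|so aggressive)\b", re.I),
    3: re.compile(r"\b(that'?s (just )?wrong|no it isn'?t)\b", re.I),
}

def level_of_disagreement(utterance: str) -> str:
    for level in sorted(CUES):
        if CUES[level].search(utterance):
            return LEVELS[level]
    return "counterargument or higher (needs model-based analysis)"

print(level_of_disagreement("That's just wrong, the report says otherwise."))
```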

The HELIOSPHERE Visualisation Engine’s primary purpose is to provide visualisation and interactivity capabilities to moderators, participants and audiences. The goal is to ensure transparency and fairness during a debate. Through a customizable visualisation, the analysis generated from the data and implemented through the Machine Learning Engine is available to the public in real-time. This allows participants to see in real-time the textual representation of their deliberations, how much time each participant spent talking, the word frequencies of the conversation (word clouds, n-grams and word co-occurrence), topic detection, point summarisation, graph representations of topics, entity relations and main points, as well as the capability to switch to a different language. Crucially, the customization capabilities are suited to the users’ particular needs - whether moderators, on-site participants or online participants - giving them an easy and personalised set of graphical interfaces that rely on both the Data Engine and the Machine Learning Engine.
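As a minimal sketch of two of these views, the code below computes speaking time per participant and word frequencies from diarised transcript segments; the segment format (speaker, start, end, text) and the example data are assumptions for illustration.

```python
from collections import Counter
import re

# Diarised transcript segments as (speaker, start_sec, end_sec, text); dummy data here.
segments = [
    ("moderator", 0.0, 12.5, "Welcome everyone to the debate on air quality"),
    ("speaker_1", 12.5, 41.0, "Air quality in cities keeps getting worse"),
    ("speaker_2", 41.0, 55.0, "Monitoring air quality helps cities respond"),
]

speaking_time = Counter()
word_freq = Counter()
for speaker, start, end, text in segments:
    speaking_time[speaker] += end - start
    word_freq.update(re.findall(r"[a-z']+", text.lower()))

print(dict(speaking_time))          # seconds spoken per participant
print(word_freq.most_common(5))     # candidates for the live word cloud
```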

Moreover, to increase inclusiveness, the system provides a hybridization of online and offline participation: advanced avatar technology seeks to provide bi-directional inclusiveness. The HELIOSPHERE platform therefore provides a spatial virtual-physical concept that uses avatar technology to include large numbers of online debate participants (scale). Such capabilities become increasingly critical in times of global pandemics, where offline gatherings of more than two people may be prohibited for an extended time.

The immersive multimedia debate concept combines a look around - where all participants, local or virtual, are situated around the same round table and can be viewed by everyone in their positions rather than on opposite sides of a rectangular table - with a look inside - where AI-driven analytics support the addition and verification of insights by analysing a given pool of trustworthy media sources of multiple origins. This guarantees the real-time detection of false assumptions and bias. The HELIOSPHERE dashboard visualises certain meta-aspects of the debate, signalling preferences and contradicting opinions as they are detected. Especially in cases where debaters contradict their own assumptions, this ensures participants stick to rational and honest statements and explain a possible change of opinion - after all, a difference of opinion is usually legitimate and often recommendable, but it should be treated openly and fairly by both speaker and listener.

With the spread of online interaction possibilities, the graphical representation of users has become ever more ubiquitous. With the origins of on-screen user representation lying in 1980s computer games, and then spreading to personal icons in 1990s web culture, new messaging apps now provide playful personalization as a standard feature (e.g., “Memoji/Animoji” on iOS, BitMoji on Snapchat, face filters on Instagram). Avatars offer users a sense of anonymity (they are not as recognisable as profile pictures or video chats) while retaining a sense of familiarity and personality for other participants. In a debating or conversational setting, we use these properties of the medium to facilitate and improve online participation in physical contexts.

When avatars are displayed as audience members on screens, this brings them one step closer to an audience that can look each other in the eye. Various modes in which online audiences can be blended in with a physical group of people - be that through 2D interfaces like monitors and screens, or 3D presences using hologram technology or robotics - can be utilised, and the most suitable solution depends on the gathering. Robotic presence is already used in classrooms worldwide, for example to represent a teacher in the home of an absent student or a distant student in a school. The interaction challenge is to find the natural fit for engaging groups of audiences while retaining the possibility of anonymity and keeping the tone of the conversation straightforward and open.

3.3 The HELIOSPHERE Visualisation Engine

Finally, the Visualisation Engine integrates rich media from user-generated and broadcaster-provided content to empower participants to point to content (host-based, web or social media) that supports or debunks arguments within a debate. Moreover, users can participate in validating whether statements and content are valid and trustworthy.

To support live debates in hybrid (both online and offline) environments, HELIOSPHERE implements an immersive and interactive experience for online debates, which calls for a variety of media elements such as 360° live video, live video from remote individuals, live-generated closed captions, and automatic subtitles in different languages. This contributes to an immersive experience of citizen participation and active contribution to a debate.

All interacting components need to be fast and synchronised so that they reach all viewers simultaneously. Since the diverse participants in the debate have differing roles and needs, these elements need to be i) object-based, ii) individually configurable, and iii) have low latency.

It is envisaged that HELIOSPHERE provides the capability of covering debates on TV as an enhanced and interactive experience. This can be made possible via a central integration system offered by broadcasters. Moreover, data can be made available centrally and accessed directly by all interested journalists and newsrooms with just a few clicks. The use and republishing of the content are free of charge. The new content exchange platform is intended to enable citizens to contribute, and the content created is made available via diverse channels.

Debates can be offered as a recording or as a live debate. In this way, specific topics provided free of charge by the broadcasters can also be made available. For a debate to be planned, a schedule is developed that allows the debate’s organiser to define the services available for each debate, for example the dashboard, 360° streaming, and link sharing for Twitter, Xing, LinkedIn, etc.
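A per-debate configuration of this kind could be sketched as follows; the `DebateSchedule` structure and its field names are assumptions introduced for illustration, not the platform's actual schema.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class DebateSchedule:
    """Illustrative per-debate configuration; field names are assumptions."""
    title: str
    starts_at: str                          # ISO 8601 start time
    dashboard: bool = True                  # enable the analytics dashboard
    stream_360: bool = False                # enable 360° streaming
    languages: List[str] = field(default_factory=lambda: ["en"])
    share_links: List[str] = field(default_factory=list)   # e.g. Twitter, Xing, LinkedIn

book_club = DebateSchedule(
    title="Clearing the Air",
    starts_at="2020-03-25T19:00:00+00:00",
    stream_360=False,                       # purely online event
    share_links=["twitter", "linkedin"],
)
print(book_club)
```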


4. Initial prototype and testing of the HELIOSPHERE concept

An initial trial with a basic configuration of the HELIOSPHERE concept was carried out over two events at Science Gallery Dublin10, Ireland. In both cases, HELIOSPHERE was part of the Science Gallery Book Club11. The first book club, held on 26 November 2019, discussed the book ‘Invisible Women: Data Bias in a World Designed for Men’ by Caroline Criado Perez, and the second book club, on 25 March 2020, focused on ‘Clearing the Air: the Beginning and the End of Air Pollution’ by Tim Smedley. It is worth pointing out that for the first book club, ‘Invisible Women’, HELIOSPHERE deployed a live 360° camera and language analytics features with an audience of over 20 people participating live and 15 online. The second book club, ‘Clearing the Air’, was a purely online event with about 13 participants, and no 360-degree camera was used. This was due to the COVID-19 pandemic and allowed the HELIOSPHERE concept to be tested purely online rather than as a hybrid of offline and online.

To increase the inclusiveness of all attending participants, the layout was circular (see Figure 1). For each book club, two moderators were active, one primary and one support. During the ‘Invisible Women’ event, the two moderators were seated at a round table capable of holding up to five people. Three more tables were positioned around it, each hosting one sub-moderator with three to five participants. For the first 40–50 minutes, the participants at each of the tables discussed the book among themselves and with their designated sub-moderator. Once the initial discussion was completed, each table’s sub-moderator joined the main round discussion table with the two leading moderators. Here, a 360-degree live-feed camera was placed to allow online viewers to follow and participate in the debate. Their comments were relayed to the moderators via an iPad on the main table. For transparency and fairness, the HELIOSPHERE AI analytics component was enabled and displayed on a screen. A microphone in the centre of the moderation table captured the discussion and streamed the audio to the AI module for further analysis. Figure 2 depicts the view from the 360-degree camera during the live stream and debate. The table scene shows the audience discussing the book with one of the moderators. The bright screen showcases the debate analytics in real-time. The top left corner presents a control for the camera, so each online participant has a complete view of what is happening in the room in real-time. Additionally, the online audience can ask questions and make comments, which are then raised by the moderators and addressed during the discussion.

Figure 1.

HELIOSPHERE spatial concept.

Figure 2.

HELIOSPHERE 360 camera angle.

The AI analytics module used speech-to-text technology to transcribe the live voice feed in real-time, including the conversation between the moderators, the author, the present audience and the online audience. During and after the debate, several types of analysis were performed. For transparency, the most frequently used words during the entire conversation were displayed live (see Figures 3 and 4 as representative examples, since the live feed was not captured at these events for privacy reasons).

Figure 3.

Example of real-time keyword extraction.

Figure 4.

Example of real-time keyword extraction.

The moderators gained an understanding of the general audience attitude during the debates based on real-time sentiment analysis and the emotional disposition during the debate (Figure 5 for the ‘Invisible Women’ discussion and Figure 6 for ‘Clearing the Air’). In these two debates, the sentiment analysis for ‘Invisible Women’ was mainly positive, while the topic of pollution and the societal problems emerging from it produced slightly more negative sentiment than the ‘Invisible Women’ discussion.

Figure 5.

Sentiment example.

Figure 6.

Sentiment example.
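As an illustration of the kind of sentiment signal shown on the dashboard, the sketch below scores a few invented statements with NLTK's VADER analyser; this is a stand-in for, not necessarily the same as, the sentiment model used in the trials.

```python
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)   # lexicon required by VADER
analyzer = SentimentIntensityAnalyzer()

statements = [
    "Designing cities around men's data leaves women invisible.",
    "The air pollution figures for this city are frankly alarming.",
    "I'm hopeful that better monitoring will clear the air.",
]

for text in statements:
    scores = analyzer.polarity_scores(text)   # neg/neu/pos plus a compound score in [-1, 1]
    print(f"{scores['compound']:+.2f}  {text}")
```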

To gain a more in-depth overview, we included a graph representation of words and topics. Each debate concept graph is connected to the overall “Heliosphere” of topics and themes, with the possibility of further analysis on the HELIOSPHERE website. Moreover, an n-gram analysis was included for both debates. This allowed us to build co-occurrence networks (graphs). For ‘Invisible Women’, the words most often associated with the word “women” include “need”, “lot”, “many”, “gained”, “educational”, “potential” and “body”, among others. For the ‘Clearing the Air’ discussion, a corresponding co-occurrence graph was generated; the lower plot presents the words associated with the word “air”, which include “chemical”, “reaction”, “pollution”, “breathe”, “monitor”, “clear”, “city”, “world” and “quality”, among others.
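A co-occurrence network of this kind can be sketched with networkx as follows; the transcript snippet, window size and weighting are illustrative assumptions rather than the trial's actual data or parameters.

```python
import re
from itertools import combinations
import networkx as nx

transcript = ("air pollution in the city affects air quality and the world "
              "must monitor air quality to keep the air clear")
tokens = re.findall(r"[a-z]+", transcript.lower())

# Build a co-occurrence graph from a sliding window over the transcript.
WINDOW = 4
G = nx.Graph()
for i in range(len(tokens) - WINDOW + 1):
    for w1, w2 in combinations(set(tokens[i:i + WINDOW]), 2):
        weight = G[w1][w2]["weight"] + 1 if G.has_edge(w1, w2) else 1
        G.add_edge(w1, w2, weight=weight)

# Words most strongly associated with "air", by accumulated co-occurrence weight.
neighbours = sorted(G["air"].items(), key=lambda kv: kv[1]["weight"], reverse=True)
print([(word, data["weight"]) for word, data in neighbours[:5]])
```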

The creation of n-grams serves several purposes simultaneously. First, it provides a real-time interactive concept map in which users can browse and click on each node (word or n-gram) to bring up more detailed information about the entity, concept or word. Through this concept mapping, HELIOSPHERE attempts to level the information accessible to all participants so that they can make more informed and transparent choices and arguments. Moreover, since all participants have access to the same set of facts and data, false information is reduced while the informational landscape is enriched. Technically, such information is extracted from the sources described in Section 2 of the current work. When a falsehood is present, it is labelled as such in the concept map, so users have a clear idea of the truthfulness of the presented information (Figure 7).

Figure 7.

Example of entity extraction.

Second, the n-grams provide the initial structure for the argument module, which allows us to track the participants’ positions on topics (whether they are for, against or neutral). This stance tracking would be crucial during debates on important or current social, political or economic issues. The module serves as an indispensable tool to track the electorate’s mood, thereby creating an instantaneous snapshot of the discourse. Additionally, the module connects to the AIF database through an API (application programming interface) to query AIF argument structures, enriching the debate in real-time while providing ground truth for debates. For instance, within the ‘Clearing the Air’ debate, pollution related to livestock consumption was discussed; the system detects this and queries the AIF database to supplement the discussion further (Figure 8).

Figure 8.

AIF argument example structure.


5. Privacy and ethics implications

The core concept of HELIOSPHERE is to overcome the crisis of trust seeded by the use of Social Media to influence and manipulate large parts of society in forming opinions. By its architecture, HELIOSPHERE does not rely on storing information or on a centralised architecture, thereby avoiding similar pitfalls. The independent infrastructure, based on small, portable and affordable computer units, is specifically designed so that no information needs to be stored or processed on an external server. Therefore, it can be argued that HELIOSPHERE works on a trust-by-design paradigm, which empowers real-time support from the AI approach. To further ensure privacy sensitivity, HELIOSPHERE does not rely on any personal information. The speech-to-text approach is not designed to identify specific and unique patterns over time but focuses on overall sentiment and terminology usage. There is, therefore, no temporal tracking in place that would allow a comprehensive analysis of a specific individual. Ethically, the concept has to evolve to ‘explain’ how the information has been collected and summarised. Hence, explainable AI needs to be applied to ensure ethical considerations, such as transparency of data-driven decisions, can be taken into account. Furthermore, it has to be assured that the approach does not evolve into ‘making decisions’ for either the moderator or the participants. Concepts that imply trust at their core are support mechanisms and should not undermine the moderators’ or participants’ trust in their own judgement.


6. Conclusions and future work

This publication introduces and discusses the HELIOSPHERE concept. Its foundation is to support the democratic process by empowering debates, both offline and online. Moreover, HELIOSPHERE presents a hybrid setting in which online and offline (physical) participation are blended. To support the discussion, three main dimensions are addressed: transparency, inclusiveness and fairness. To empower openness and fairness, an AI dashboard was presented, including an initial trial. The AI dashboard supports both the participants and the moderators in balancing the debate based on objective data related to sentiment and the most debated topics. Concerning inclusiveness, state-of-the-art camera technology, such as 360-degree cameras, was introduced.

Concerning future work, all three areas - transparency, inclusiveness and fairness - are to be extended towards the vision presented in the publication’s introduction. Specifically, concerning transparency and fairness, the AI dashboard is being developed and tested for speaker-time detection, speaker gender detection, off-topic detection and bias detection. Concerning inclusiveness, more advanced technological approaches, such as avatar technology, are being tested to overcome the barrier between online and offline audiences. For HELIOSPHERE, it is essential not only that the online audience be included more in the debate via camera, voice and commenting technology, but also that the offline (physical) audience be made more aware of the online audience, which is currently mainly an image or face on a screen. Using more advanced avatar representations makes it possible to bring the online audience closer to the experience within the space.

A further area that can be extended is the detection of sentiment in the debate and towards mentioned topics. This allows a moderator to ‘take the heat out’ of a debate. On the flip side, a moderator can also be informed that the overall debate has slowed down too much and needs to be reignited. Features such as speaking time per gender and other balancing metrics are possible and can also be extended to more sophisticated areas such as bias and off-topic detection.

Finally, it has to be noted that there has been a recent surge in large-scale real-time online debating platforms such as Clubhouse12, which has become very popular and has reached over 10 million weekly active users13. Competitors such as Twitter and Facebook are rumoured to be developing alternative real-time debating platforms with the same premise that conversations are not recorded or post-analysed.

In conclusion, it can be stated that the HELIOSPHERE concept forms a solid foundation for blending complex online and offline communication, such as highly interactive debates, and with that for supporting democracy as one of the foundations of society.


Acknowledgments

This work is supported by the ADAPT Centre, funded under the Science Foundation Ireland Research Centres Programme (Grant 13/RC/2106).

References

  1. Nabatchi T (2012) An Introduction to Deliberative Civic Engagement. In: Nabatchi T, Gastil J, Weiksner M, et al. (eds) Democracy in Motion: Evaluating the Practice and Impact of Deliberative Civic Engagement. Oxford: Oxford University Press, pp. 3-39
  2. Boulianne, Shelley. “Mini-publics and public opinion: Two survey-based experiments.” Political Studies 66.1 (2018): 119-136
  3. Fung, A. (2003). Survey article: Recipes for public spheres: Eight institutional design choices and their consequences. Journal of Political Philosophy, 11(3), 338-367
  4. Fishkin JS and Luskin RC (1999) Bringing Deliberation to the Democratic Dialogue. In: McCombs M and Reynolds A (eds) The Poll with a Human Face: The National Issues Convention Experiment in Political Communication. Mahwah, NJ: Lawrence Erlbaum, pp. 3-38
  5. Gastil J, Knobloch KR and Kelly M (2012) Evaluating Deliberative Public Events and Projects. In: Nabatchi T, Gastil J, Weiksner M, et al. (eds) Democracy in Motion: Evaluating the Practice and Impact of Deliberative Civic Engagement. Oxford: Oxford University Press, pp. 205-230
  6. Grönlund K, Setälä M and Herne K (2010) Deliberation and Civic Virtue: Lessons from a Citizen Deliberation Experiment. European Political Science Review 2(1): 95-117
  7. Holm, Søren. “A general approach to compensation for losses incurred due to public health interventions in the infectious disease context.” Monash Bioethics Review (2020): 1-15
  8. Kanra, B. (2012). Binary deliberation: The role of social learning in divided societies. Journal of Public Deliberation, 8(1), 1
  9. Niemeyer, S. (2011). The emancipatory effect of deliberation: Empirical lessons from mini-publics. Politics and Society, 39(1), 103-140
  10. Elstub, S. (2014). Mini-publics: Issues and cases. In S. Elstub & P. McLaverty (Eds.), Deliberative democracy: Issues and cases (pp. 166-188). Edinburgh: Edinburgh University Press
  11. Bächtiger A, Setälä M and Grönlund K (2014) Towards a New Era of Deliberative Mini-publics. In: Grönlund K, Bächtiger A and Setälä M (eds) Deliberative Mini-publics: Involving Citizens in the Democratic Process. Colchester: European Consortium for Political Research Press, pp. 225-245
  12. Mansbridge, J., Bohman, J., Chambers, S., Christiano, T., Fung, A., Parkinson, J., & Warren, M. E. (2012). A systemic approach to deliberative democracy. In J. Parkinson & J. Mansbridge (Eds.), Deliberative systems: Deliberative democracy at the large scale (pp. 1-26). Cambridge: Cambridge University Press
  13. Curato, N., & Böker, M. (2016). Linking mini-publics to the deliberative system: A research agenda. Policy Sciences, 49(2), 173-190
  14. Barber, B. R. (1984). Strong Democracy: Participatory Politics for a New Age. Los Angeles: University of California Press
  15. Dahl, R. A. (1989). Demokratin och dess Antagonister [Democracy and its Critics]. New Haven: Yale University Press
  16. Davies, T. and R. Chandler (2011), ‘Online deliberation design: choices, criteria, and evidence’, in T. Nabatchi, M. Weiksner, J. Gastil and M. Leighninger (eds), Democracy in Motion: Evaluating the Practice and Impact of Deliberative Civic Engagement, Oxford: Oxford University Press, pp. 103-134
  17. Baek, Y. M., M. Wojcieszak and M. X. Delli Carpini (2011), ‘Online versus face-to-face deliberation: Who? Why? What? With what effects?’, New Media & Society 14(3): 1-21
  18. Dahlberg, L. (2001), ‘The internet and democratic discourse – exploring the prospects of online deliberative forums extending the public sphere’, Information, Communication & Society 4(4): 615-633
  19. Sunstein, Cass R. #Republic: Divided Democracy in the Age of Social Media. Princeton University Press, 2018
  20. Pariser, Eli. The Filter Bubble: What the Internet Is Hiding from You. Penguin UK, 2011
  21. Jamieson, K. H., & Cappella, J. N. (2008). Echo Chamber: Rush Limbaugh and the Conservative Media Establishment. Oxford University Press
  22. Benoit, W. L., Hansen, G. J., & Verser, R. M. (2003). A meta-analysis of the effects of viewing US presidential debates. Communication Monographs, 70(4), 335-350
  23. Garrett, R. K. (2009). Echo chambers online?: Politically motivated selective exposure among Internet news users. Journal of Computer-Mediated Communication, 14(2), 265-285
  24. Cadwalladr, C., & Graham-Harrison, E. (2018). Revealed: 50 million Facebook profiles harvested for Cambridge Analytica in major data breach. The Guardian, 17, 22
  25. Blitz, Marc Jonathan. “Lies, Line Drawing, and Deep Fake News.” Okla. L. Rev. 71 (2018): 59
  26. de Prado, Miguel, et al. “AI Pipeline - bringing AI to you. End-to-end integration of data, algorithms and deployment tools.” arXiv preprint arXiv:1901.05049 (2019)

Notes

  1. https://hadoop.apache.org/
  2. https://www.tensorflow.org/
  3. https://pytorch.org/
  4. https://opennmt.net/
  5. https://marian-nmt.github.io/
  6. https://www.nltk.org/
  7. https://radimrehurek.com/gensim/
  8. https://spacy.io/
  9. https://www.clips.uantwerpen.be/pattern
  10. https://dublin.sciencegallery.com/
  11. More information can be found at http://heliosphere.social
  12. https://www.joinclubhouse.com/
  13. https://www.statista.com/statistics/1199871/number-of-clubhouse-users/
