Open access peer-reviewed chapter

On Structuring of Media Information on Sensitive Issues

Written By

Dora Gelo Čolić

Submitted: 07 July 2023 Reviewed: 18 July 2023 Published: 28 August 2023

DOI: 10.5772/intechopen.1002541

From the Edited Volume

From Theory of Knowledge Management to Practice

Edited by Fausto Pedro García Márquez and René Vinicio Sánchez Loja


Abstract

Structuring media information on sensitive issues is the process of optimizing information content to integrate the essential tension between two contradictory demands. On one side is the demand for openness, that is, for information of public interest. On the other side is the demand for secrecy, with regard to the sensitivity of information possibly linked to specific parts of the national security concept. The content of the term public interest is determined by what is widely accepted as essential for the long-term well-being of society and its members. The topic is approached within the framework of semantic information theory: the information published in the public media is analyzed in accordance with the theoretical guidelines in a concrete environment, and the analysis isolates the elements of information that correspond to the given goal and those that do not.

Keywords

  • information
  • relevance
  • media
  • data quality
  • usefulness

1. Introduction

Structuring media information on sensitive issues is the process of optimizing information content to integrate the essential tension between two contradictory demands. One of them is the demand for openness, the demand to give access to information that is critical for citizens, in other words, information of public interest. The other is the demand for secrecy with regard to national security.

The concept of public interest and its content can be debated. However, it is unlikely that subtle differences in perception would significantly diminish the importance of factors such as the relevance of information, its truthfulness, and the clarity and coherence of reporting.

“…there is the idea of ‘public interest’ as in the chapter title, used widely and diversely, sometimes misused and escaping an agreed definition except in very general terms (as a notion of the welfare of the public or society as a whole). However, Jay Blumler did suggest (1998) a fairly clear view of what it would mean, in terms of three key features: it refers to a vision of what is good for the many as instituted by some form of legitimate democratic authority (as are certain other functions in society, such as government or the justice system) and having an element of public responsibility as a consequence. It is an idea that requires a long time span, which takes into account the long-term needs and effects, thus of future as well as present generations. In addition, the notion of a public interest has to recognize conflicts of interest and of perspective in implementation; it must permit compromise and adaptation to the realities of the time and place” [1].

Is that a problem worth solving? By simple observation, we can notice many lines of force in the public space. More often than not, however, we do not recognize their source or even their mutual interference, goals, or the sources’ motivations. We might find that all of these are factors producing or empowering the media noise we witness nowadays. It is not unreasonable to conclude that such media noise could be the reason for the fear that humans feel while deciding whether to believe something they have heard. Furthermore, that fear could be reflected in the opposite direction and make people disbelieve something that they actually should believe. Nevertheless, to analyze “what is what and who is who in the zoo” is not an easy task.

According to McQuail [2], media theory is a complex structure of socio-political and philosophical principles. He sees it as the knowledge that organizes ideas on the relationship between the media and society. His classification includes, among a few other theories, normative media theory, which deals with the issue of what the media should do rather than what they are really doing. Although McQuail categorizes theory into a few basic varieties, he adds that, in reality, there are no clean models; rather, what exists in a concrete society is a model combining theoretical elements and media types. He also believes that the basic principles of media activity can be isolated as independence, diversity or pluralism, information quality, and the preservation of social and cultural order. He warns that some of these principles conflict but that one of the aims of regulation is managing tensions and settling conflicts. The conflict in focus in this research is between secrecy and openness, which we see as something the national security system is entitled to vs. the demand of the public as part of the norm of a democratic society. But it is just one of the many characteristics of this specific relationship between the system and the media. As the authors Peter Gill and Mark Phythian [3] state, part of their relationship can also be marked by a combination of dependence, manipulation, support, and praise. Besides that, the similarity between the two is worth mentioning as well: looking at the relationship, we can notice that both have specific connections with the source, which opens up the possibility of fraudulent and illegal behavior to obtain information that is not accessible to the public by regular means. These phenomena [4] are also directly connected to the possible negation of basic principles like independence and information quality.

Can we solve it within the framework of science? We can exaggerate by saying that we will find the one and only solution within the framework of science and resolve all of our problems with regard to media reporting. It is also possible to exaggerate in the other direction and argue that a solution is not possible, not by any chance and not under any condition. Neither perspective finds a firm foundation if we look at science as something in the middle of those two: something that asks questions, seeks answers, gets partial answers, and tries again.

Is it applicable? It could be, and it will be presented in this paper. Firstly, a theoretical analysis of the fundamental terms of the information sciences was conducted. The results of the analysis were applied to a certain number of units; by units, we mean sentences from the chosen media article. An empirical analysis then filtered the applied results in a way that gave them space to live within a concrete environment. That resulted in a concretization of the fundamental theoretical terms. The overall result will be shown in this paper, together with the process.

Is it real? In most disciplines, and not rarely in research, we encounter the practice-theory relationship problem. We can find the same in this particular research and in the discipline we lean on here. The information sciences, like any other discipline, can have a theory that seems not to work in reality and a practical working solution that cannot be confirmed outside of tacit knowledge. According to many sources, gathering information is no longer a problem in today’s world. By some estimates, open sources contain a high percentage of the required information, indicating an increased need to improve the decision-making process, mainly the analysis process [5]. That would be the place where practice and theory should constantly meet, making each other real.

As a wide variety of problems in science, engineering, etc., can be posed as optimization problems [6], the goal of the present model, and hopefully of an optimization algorithm in the future, is to calculate the right measure or the right language structure of the sentence and to optimize information, naturally following the results of the research process. Secondly, an alternative to optimizing every piece of information (in this research, information equals sentence) is to optimize the whole media article by picking only the sentences whose calculated values fall in the range determined to legitimize publishing, based on the scope and goal of the research.

Taking into account the need to be able to change this model, as this particular optimization process must be understood as “learning by doing,” we are considering, for much further research, a learning process in the artificial neural network (ANN) context, that is, the ANN’s ability to learn automatically from examples. This can be considered as updating the network architecture [6]. With this mode of applying the results, it is especially interesting that the ANN learns the underlying rules from the given representative examples.


2. Methodology

The methodology used in this research is mostly inspired by Florian Kohlbacher’s design, which implies applying qualitative content analysis as an interpretation method in a case study. Given that this research is initial in nature and that its main purpose is the processing of the semantic theory of information in the context of its suitability for solving the given research problem and related problems, refining terminology, operationalizing concepts, and creating an analytical matrix of relationships between concepts, a case study research principle with admixtures of the description principle is used. The latter serves the transparency of the empirical analysis, which should contribute to critical observation of the research process and its results.

In accordance with the need to resolve the essential tension that this problem contains, and with its multi-perspectivity, the needed methodological framework is one that will result in the resolution of the problem in all its dimensions, so that in future research these dimensions can be specified more narrowly, and thus more precisely in research questions/hypotheses, which will potentially result in more precise answers.

The formulation of research questions is the first component of the research design, and in this study a total of four research questions were asked. The choice of methodological strategy, including the type of methodology and methods, was explained previously, and here it should be emphasized once again that it was decided that, at this moment, it would be counterproductive and contrary to the basic goal of the research to set hypotheses and use any type of standard statistical or other quantitative methodology. For this reason, one cannot talk about a completed model of objective quantification of information but only about the testing of possible variables. In other words, at this stage, this is a representation of the model that is aimed for, the model in which all variables would be included in the calculation and more strongly modulated.

Consequently, an iterative analysis method is used; at this stage, it is therefore possible to talk only about an approximate solution, not an exact one, with a gradual reduction of the error after each step. Considering the complexity of the system being processed, it is to be expected that a method of this character and scope will also be used in the following research, possibly in a more complex form, by narrowing down and determining the number of steps of the information analysis and/or by introducing allowed limits of error reduction in the results, in the sense of limiting the legitimacy of the results.

Firstly, a theoretical analysis was conducted. The fundamental concepts of information science were analyzed: relevance, usefulness, informativeness, data, and information. The paper presents brief considerations and the results of the theoretical analysis as initial settings for testing in the empirical analysis. Despite the large number of definitions of the fundamental concepts and concepts related to them and the many perspectives of their contents and relations, it was decided to use those that, in terms of their formation, best fit the framework of the specific research. As the goal of the research is to isolate an analytical framework for structuring information of public interest subject to the requirement of secrecy and to examine the adequacy of semantic information theory as a framework for establishing a balance between the requirements of secrecy and openness, the first starting point was to check whether semantic information theory is an appropriate framework for the research problem.

To achieve the subsidiary goal, that is, the isolation of a specific preliminary model for testing the described publications in the media, the following is examined:

  1. whether truthfulness or informativeness is more important for the relevance of information,

  2. which elements/categories are sufficient for building this model, and

  3. what are the relations between them.

In the second phase, the results of the theoretical analysis were applied to a certain number of research units. They were analyzed using structure analysis, syntax analysis, grammatical analysis, and narrative analysis. The number of research units for the empirical analysis was not determined in advance; the analysis lasted until a conclusion was reached on all examined categories and elements, that is, until the findings determined that a final conclusion could not be reached unambiguously.

In this particular research, a total of 34 research units were analyzed. The sentences were selected from the chosen media article, per the defined theme framework and based on an evaluation of the content in terms of participants, activities, events, and venues, by convenience sampling from a daily or weekly national newspaper. In the selected article, the research units (sentences) were selected as a representative part of the material (article) or as a part of the material judged suitable for examining a certain category or relation.

With the results of the empirical analysis that filtered the applied theoretical terms, we created a matrix as the meeting point of the theoretical terms and their empirical usage. By picking out patterns during the matrix analysis, we produced a table containing formulas that express connections between the examined empirical and theoretical elements.

In the third phase of the research, the analysis was done based on the produced table, and the results were recorded. Considering the overall goal of the research, many more units would have to be included in order to achieve a larger number of representative examples.

Matrix results for relevance, usefulness, and average data quality are the variables for the publication test. Given that 34 research units were analyzed, the obtained values ranged as follows:

R = 0–7.65 (E1)
Kor = 0–105.65 (E2)
KVg = 0–66 (E3)

A sentence passes the publication test if the cumulative results satisfy R > 3.8 and Kor > 35.21, or R > 3.8 and KVg > 22.
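As an illustration, the decision rule can be sketched in a few lines of code. The snippet below is a hypothetical Python rendering of the publication test, assuming R, Kor, and KVg have already been calculated for a sentence; the function and variable names are ours, not part of the model.

```python
# Hypothetical sketch of the publication test; R, Kor, and KVg are
# assumed to be precomputed for a single sentence (research unit).

R_MIN = 3.8      # above half of the observed relevance range (7.65 / 2)
KOR_MIN = 35.21  # above a third of the observed usefulness range (105.65 / 3)
KVG_MIN = 22     # above a third of the observed data quality range (66 / 3)

def passes_publication_test(r: float, kor: float, kvg: float) -> bool:
    """Return True if the sentence may be published under the preliminary model."""
    if r <= R_MIN:
        return False
    # Either sufficient usefulness or, alternatively, sufficient data quality.
    return kor > KOR_MIN or kvg > KVG_MIN
```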

The choice is based on the evidence registered in the research. Relevance, as one of the underlying variables, must be above 0.5 of the range. Usefulness, as the second fundamental variable, is treated partially differently, for the reason that we do not see the information processed here as a product that should stimulate the reader’s action or articulated activity, but rather as media reporting with an unspecified goal in the context of the reader’s activity. For media reporting, it is of primary importance that the information is relevant to the highest degree possible, which means that it has a certain amount of correct and precise data and that the ratio of verifiable data to the total number of data it consists of is satisfactory. Usefulness is a category that is no less important in this relationship but has less impact on the publication test for the reasons stated above. Therefore, it was decided that any value above a third of the range makes the information publishable. Data quality, an empirical category expressing the product of the sum of data preciseness and correctness and the nominal number of verifiable data in the information, is set among the alternative test variables; its value must be higher than a third of the range for the information to be published.

The alternative test model, in which information with sufficient data quality is published even when its usefulness is too low, was introduced in order to develop the trend of openness of the system toward the public and, within the purpose of this research, to find the optimal measure of reporting on specific issues given by this framework in the context of public interest.

It is important to note that the publication test ranges of this preliminary model are determined within the data processed here. In the case of expanding the data set, each new result would potentially change the range and assessment of the information.

In the event that standardized quantification turns out to be possible to some extent, it is assumed that a very large number of data, which could be processed in an automated way, would need to be included in that set. However, given the significant number of problems that exist in examining and evaluating information in this way, it is currently more appropriate to test the model on several small sets and analyze the possibilities of solving the underlying problems. The analysis procedure, which begins with the extraction of data and continues with their characterization, is more important than these simple calculation hypotheses, which only reflect the behavior of the categories in the evaluated information; even without the presented preliminary result expressed numerically, an average reader carrying out this process could become better acquainted with the value of the given information.


3. Theme framework

Considering that the term “information” is often used to represent a variety of things, events, or expressions, it cannot be interpreted as those things, events, or expressions themselves unless it is within a context and understood meaningfully [7], so we put the research in a specific context to examine the theory. The theme framework accordingly needs to be defined in order to create a scope for researching fundamental theoretical terms of the information sciences such as relevance, informativeness, and usefulness on the one hand, and data and information on the other.

The theme framework is related to the national security concept but in its specific nonmilitary aspects, like the inner structural weakness of the system, the vulnerability of the institution, and the vulnerability of the value system related to political stability and social cohesion.

Layeredness as a characteristic of the state’s nature, and especially the specific structures and circumstances of a given state, makes it open to multiple threats. The idea of the state and its ideology are common targets of political threats. This comes to the fore especially in states where ideas and institutions are divided or opposed, because such entities are highly vulnerable to political penetration. Internal division, however, is not a necessary element; it only contributes to vulnerability, because even entities that are strong and influential can be objects of political threats. These can sometimes descend to a level where it is extremely difficult to distinguish whether what is at stake is an exchange of ideas and information or a threat to national security. Here we are talking about language, religion, and local cultural traditions, which can also play a role in the creation of national identity and, in that sense, can be objects of attack and of necessary protection. Since political threats have become increasingly complex in execution, they often do not occur directly and visibly but in such a way that external factors become involved in internal issues and discussions, taking the sides that represent goals closer to the goal of a particular political intervention. In this sense, there are numerous variants of the process of influence and its goals, among which we can often single out the use of weak states to balance forces, that is, to determine the patterns of the international order at a given moment by more influential or stronger political subjects. It is possible to conclude that the intertwining of types of threats and the scale of the vulnerability of a given society and state is difficult to define unambiguously and in the long term, but it is possible and necessary to develop systems that would minimize the harmful consequences of various attacks, i.e., detect the sources and processes of threats as early as possible [8].

Following the aforementioned points, this includes situations in which hybrid threat activity, applying combinations of tools, targets a state in the legal, diplomatic, information, and political domains. Each tool and combination of tools can target one or multiple domains, or the interface between them, by creating or exploiting a vulnerability or taking advantage of an opportunity [9].

The underlying idea of the theme framework is finding a balance between democratic standards and the timely registration of hybrid threats, which this research considers the goal of public interest.


4. Theoretical analysis

4.1 General definition of information

The general definition of information (GDI) is an operational standard that defines information as an item that:

  1. is composed of one or more data,

  2. is well-composed, and

  3. has meaning.

By well-composed data, we mean data that are composed in accordance with the syntax of a certain system, whereby syntax is understood in a broader sense than in linguistics, including the determination of form, construction, composition, and structure [10].

A primary characteristic of Floridi’s semantic information, though not self-sufficient, is the meaning in a particular code, system, or language implying compliance with the chosen code, system, or language.

The fundamental nature of data is that we see x as different from y, where x and y are two uninterpreted variables; in other words, the fundamental nature of data is the exhibiting of an anomaly.

The GDI endorses the thesis that a datum is a relational entity. This implies taxonomic neutrality, since nothing seems to be a datum per se; being a datum is a matter of relation. Although we have the slogan “data are relata,” the GDI is neutral with respect to the classification of data with specific relata.

The second implication of the GDI analyzed in this research is typological neutrality, meaning that information can consist of different data types as relata. Although the typology is not yet fixed and standard, it is quite common to have five classifications. Depending on the analysis conducted and the adopted level of abstraction, the same data can be understood as more than one type [11].

As noted in the previous section, the term “information” cannot be interpreted as things, events, or expressions themselves unless it is within a context and understood meaningfully [7], which is why the research was placed in a specific context to examine the theory. Therefore, my research results are to be interpreted only as such. There is a strong possibility that the model would work in other conditions, but that would have to be proven in other contexts. Even this model will have to prove itself sustainable, and it is expected to grow and change, as it is alive. This is also expected because we can anticipate changeable circumstances in which the relations exist.

4.2 Data characteristics

To calculate the categories that characterize the information, it is necessary to evaluate the preciseness and correctness of the data that the information consists of.

The issue of assessing correctness and preciseness was solved during the empirical analysis by applying the initial settings for data correctness and preciseness. According to the initial theoretical settings, incorrect data are those that are damaged by errors or inconsistencies. As for data preciseness, preciseness is theoretically understood as a measure of the possibility of repeating the collected data.

When one of the implications of the GDI as an operational standard is taken into account, namely taxonomic neutrality, according to which data are relational, it was concluded that the correctness of the data is established when, at the sentence level, there is no contradiction between the data that the information consists of. Contradiction is examined at all levels: syntactic, semantic, and grammatical. In addition to the context, the data can also be examined in relation to external entities.

In analyzing data preciseness, the measure of data repeatability was also examined at all previously mentioned levels. It was concluded that precise data in this framework are data whose repetition brings meaningful value, while imprecise data are those whose repetition, although possible, brings no meaningful value in the first, the second, or the tenth attempt. As with testing correctness, data preciseness can also be tested in relation to external entities.

In the example:

“The fact that after some time SOKO ZI decided on its own initiative to pay penalties due to the delay of part of the delivery is also surprising.”

The defined context of the media article as a whole:

“the state was damaged by the payments under disputed contracts”.

Isolated data:

  1. After some time, SOKO ZI decided on its own initiative

  2. SOKO ZI paid the penalties after some time

  3. The delivery was partially delayed

All data were assessed as correct because there is no contradiction between them at any examined level. All data were assessed as precise because, in relation to the context, it is sufficiently precise that the decision was self-initiated, that the penalties were paid, and why they were paid. However, all the data could be characterized as imprecise in relation to external entities: in the first datum, the duration of “some time” before SOKO ZI decided on its own initiative was not specified, and accordingly it was not specified when the penalties were paid. The third datum is also imprecise because it does not give content to the delay, i.e., it is not specified whether “partially” refers to the delivery time, the quantity, or the quality of the delivery. It was concluded that, in accordance with the criterion of pragmatism, i.e., taking into account the constructive contribution to the practical aspect of the research, they are defined as precise in relation to the context. That is because the usefulness of knowing about the activities and the reasons for them in a defined context is greater than the harm connected with the imprecision of these data.

Defining the criteria for data verification is very important. It is strongly advised not to keep a fixed list but to update it constantly. The criteria naturally depend on the data we want to verify, so we started with somewhat self-evident criteria: the possibility of confirmation in transparent records, minutes, public reports, publicly verified documents, and public registers; transparent actions of the involved actors, such as direct TV interviews or any other similar direct source; and five open sources confirming the information (today not applicable due to the substantial amount of web copying).

During the analysis, we observed a few more, as in the examples below.

Data 5 Croatian citizens have started thinking about banks and the banking system that can hide many secrets – verifiable.

Data 6 Croatian citizens have started thinking that the president could lie and be connected to crime if he has an advisor who can lie and be connected to crime – verifiable.

Data 7 Croatian citizens began to think of other things as well – verifiable.

We see that the author used a very general syntagm, “Croatian citizens.” We do not know their age, region in Croatia, education, or even their number, so we could apply a very simple sociological survey: take an undefined sample of Croatian citizens (as here), try to include 1000 people so the research could be considered legitimate, and verify or not verify this data. We can see that different data can inspire an indefinite number of criteria.

The specific characteristics of the environment in which the data were published also turned out to be crucial in defining the criteria for verification. The examples shown here are from the time when the Croatian Act on the Right of Access to Information [12] did not exist in the legal system. As Article 3 of the Act states, “The objective of this Act is to enable and ensure the exercise of the right of access to information and the reuse of information, as granted by the Constitution of the Republic of Croatia, to natural persons and legal entities, through openness and the publicity of the activity of public authority bodies.” Therefore, from the moment the Act became valid in the country, many more data are verifiable.

The criteria for data verification indirectly, through the category of truthlikeness, strongly influence the calculation of relevance, so they have to be considered carefully. However, changes in the research results, and even in the model itself, are to be expected due to environmental changes.

4.3 Relevance

Over time, relevance was explored in different directions, but it is possible to summarize them into four approaches to the nature of relevance: systemic, communicative, situational, and psychological, to which we can add a fifth, interactive framework based on a layered interaction model of information searching, where interactions include levels or layers. It is considered that there is not only one relevance but an interdependent system of relevances, which dynamically interact with each other within and between different layers or levels and adapt when necessary. A categorization of the manifestations of relevance was proposed and cross-referenced with the system of relevances. Relevance is considered to have a set of general features that characterize its nature, which include:

  1. relation

  2. intention

  3. context

  4. inference by involving an assessment of a relation, often an advanced assessment of the success or degree of enhancement of a particular relation, such as an assessment of some information sought in relation to a context-directed intention

  5. interaction, meaning that inference is achieved as a dynamic, interactive process in which interpretations of other features can change depending on understanding [13].

Following this, in this research we consider relevance between the author of the media article/its content and the reader as an element of efficiency strongly related to the characteristics of the data of which the particular information consists in the specific context. We find it an unchangeable characteristic from the perspective of the time index: since average preciseness and average correctness do not change, the only change we could expect is in truthlikeness.

4.4 Truthlikeness as the best possible truthfulness

We did not include the concept of truth because we strongly believe that, because of its complexity, generally speaking and especially in regard to this subject, it would limit the scope of the research and its usefulness. Therefore, we are content with the term truthlikeness as the best possible concept one can work with.

As there might be a betweenness relationship among worlds [14] or even a fully-fledged distance metric, we can examine what is closer to the truth. The essence of the likeness approach is that the truthlikeness of a proposition is somehow dependent on the likeness between worlds. Graham discusses three main problems for any concrete proposal within the likeness approach:

  1. an account of likeness between states of affairs – what does this consist of, and how can it be analyzed or defined?

  2. the dependence of the truthlikeness of a proposition on the likeness of worlds in its range to the actual world: what is the correct function?

  3. “translation variance” or “framework dependence” of judgments of likeness and of truthlikeness.

The truthfulness [10] that is the changing measure on this scale is actually the degree of realism of a particular claim or information, its proximity to the real state of affairs. The problem is determining the proximity to truth/reality/information. Considering what was mentioned earlier, it is concluded that the proximity to truth/reality/information can be expressed as truthlikeness, that is, the proportion of verifiable data in the total number of data:

ST = PP / UBP (E4)

PP – number of verifiable data.

UBP – total number of data.

and the relevance

R = (is + pr) × ST (E5)

ST – truthlikeness.

is – correctness of data (total points for all data in the information).

pr – preciseness of data (total points for all data in the information).
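Read together, E4 and E5 translate into a short computation. The sketch below is a minimal Python rendering under the reconstruction above; the argument names mirror the symbols (PP, UBP, is, pr), and the sample scores are invented for illustration.

```python
def truthlikeness(pp: int, ubp: int) -> float:
    """ST = PP / UBP (E4): the proportion of verifiable data in the total number of data."""
    return pp / ubp

def relevance(is_points: float, pr_points: float, st: float) -> float:
    """R = (is + pr) x ST (E5); 'is' is renamed is_points because it is a Python keyword."""
    return (is_points + pr_points) * st

# Invented example: 7 data, 5 of them verifiable, is = 4, pr = 3.
st = truthlikeness(5, 7)   # ~0.714
r = relevance(4, 3, st)    # 5.0
```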

4.5 Usefulness

In defining logical relevance, Cooper distinguished relevance from usefulness (utility). Relevance deals with “what it is about” and is finally defined by way of logical implication, while usefulness is a universal term that includes not only thematic connection but also quality, novelty, probability, and many other things. Building on this distinction, Cooper was the first to give an in-depth treatment of utility instead of relevance as a measure of search efficiency. He built his argument on the assumption that “the purpose of a search system is (or at least should be) to search for documents that are useful, not just relevant,” elaborating further: “The success of a retrieval system must ultimately be judged on the basis of comparison of some kind involving costs and benefits” [15]. As in the pertinence point of view, the approach in the pragmatic point of view was to divide the notion of relevance, to differentiate and show that relevance is one thing and pertinence or utility another, although they are related.

Starting mainly from the distinctions established in Cooper’s work, and examining the sentence as the information carrier, in these specific conditions we look at the usefulness of the information as the property of being in the public interest as defined earlier. That means that even when the calculated value is not very high, the information could be in the public interest and, therefore, useful for publication, especially when it comes to exposing corruption, which is not easy to verify in the sense of data verifiability.

Kor = (KVg / IGP) × D × I (E6)

KV(g) – average quality of data

KVg = PP × (is + pr) (E7)

is – correctness of data (total points for all data in the information).

pr – preciseness of data (total points for all data in the information).

D – time index: the number of days from the zero-event day, or from a connected (thematically related to the zero event) event day, to the day of publishing the article, scored as 0–2 days = 10 points, 3–5 = 9, 6–8 = 8, 9–11 = 7, 12–14 = 6, 15–17 = 5, 18–20 = 4, 21–23 = 3, 24–26 = 2, 27–29 = 1, 30 or more = 0.

IGP – total number of nouns, verbs, and adjectives in the information.

I – informativeness

I = 10 − X (E8)
X = 1 + SP / PRP (E9)

SP – secondary data.

PRP- primary data.

Here, a few things could and should be discussed. Why does the calculation of relevance include truthlikeness, which stands for the proportion of verifiable data in the total number of data, while the calculation of usefulness uses the exact, nominal number of verifiable data? The answer rests on two considerations. One is the different decomposition of average data quality and relevance, which enter the proposed formulas with different coefficients. The other lies in the discussion of relevance and the possibility of partial relevance, which can be viewed as a range of nuances [16].

Clearly, our choice is to separate usefulness from relevance on the basis of the discussions below, drawing on the content of the discussed terms rather than on the terms themselves, in order to make them vivid in the given research framework.

Greisdorf notes that relevance is defined very differently and singles out Goffman [17], who defines relevance as a measure of the information delivered by a document in relation to a query and determines that the connection between document and query is not by itself sufficient to establish relevance. Since relevance can be defined as a measure of information, it must depend on what is already known, a fact that must be recognized in any assessment of a document’s relevance to a query; the measure of relevance is therefore also evaluated in relation to the set of documents containing the document whose relevance is assessed. If the moment of relativity is included in the assessment of relevance, measurement results can be obtained in a graduated sense rather than in two extremes, yes or no; consequently, there is a way to find a gradation of relevance. In this framework, relevance is measured with a range as the possible outcome, within which the calculated values are observed in relation to the highest and lowest values obtained in the analysis. Many authors analyze the concept of graded relevance, but a few strongly advocate the position that relevance either exists or does not, and that what supporters of graded relevance call degrees should be categorized in a different way.

The situational approach of the Syracuse School, which considers situation, social context, multidimensionality, temporal dependence, and dynamics as the key elements and properties characterizing the nature of relevance and the processes by which relevance relations are established, also influenced this choice. Relevance is “a dynamic concept that depends on the user’s judgment of the quality of the relationship between information and information needs at a given moment” [18]. Analogously, user judgments in this research correlate with the determinants defined by the theme framework. During the research, it was concluded that it is important to include the relationship between verifiable and unverifiable data as an indicator of the situation in which the information was published and, indirectly, as an indicator of intention, relationship, and connection with the social context. Usefulness, on the other hand, as an indicator of the characteristic of being in the public interest, increases with the number of verifiable data, regardless of the total number of data.

A significant difference can also be observed in the dependence on the time of publication, on which the calculation of usefulness depends, whereas information, once marked as relevant, remains a time-resistant, that is, unchanging category in the context of the time index, provided the circumstances affecting the assessment of data verifiability do not change. Consequently, in this research, relevance is considered a category strongly related to the features of the data that a particular piece of information consists of in a specific context. It is viewed not as a measure of concordance with the thematic framework but as a measure of the amount of information found in relation to the given thematic framework of the research in which, as previously stated, media articles are selected in relation to activities, persons, and events, that is, as a “measure of the effectiveness of contact between source and destination in the communication process” [13].

4.6 Average quality of data

To calculate the average quality of data, first we must identify the type of data. As we mentioned earlier, the second implication of the GDI that interests us is typological neutrality. It means that information [11] can consist of different types of data as relata, and although the typology is not yet fixed and standard, it is quite common to have five classifications. Depending on the analysis conducted and the adopted level of abstraction, the same data can be understood as more than one type.

  1. primary data – basic data stored in a database

  2. secondary data – they are the opposite of primary data and represent their absence (anti-data) – silence can be very informative

  3. metadata – indicators of the nature of some other data, usually primary data

  4. operational data – data on the operation of a data system as a whole

  5. derived data – data that can be derived from some other data and then we consider these other data as an indirect source

Classifying data, that is, determining the type of data, is done in accordance with the classification connected to the second implication of the GDI, typological neutrality, which means that information can consist of different types of data as relata. Within the framework of typological neutrality, data are classified in relation to the context as an operating system. The context of the sentence is determined by content analysis of the article in which it is found, and in this sense the context is the operating system in which the data function.

Following the aforementioned parallel, primary data, originally the basic data stored in a database, are in this frame the data as a set of words forming the core of the context. Secondary data are the opposite of primary data and represent their absence (anti-data), which in this framework means that the data tend to be primary, but the given set of words nevertheless does not provide information; that is, the set of words is an anomaly in relation to the previous state and tends to be the core of the context but does not provide recognizable content. Derived data are data that can be derived from some other data, this other data then being considered an indirect source. In this framework, derived data are those that are not an integral part of the context. During the analysis, a problem arose with adjusting the term derived data, given that the type of data is determined in relation to the context. The solution adopted during the analysis was to extract them from the information and classify them in relation to the context, because it is only in relation to the context that it is possible to determine that they are not part of it. The fact that they exist in the information makes them an integral part of the system in a specific way. Accordingly, they derive from some other source but have a specific function in the system. One of the results showed that they serve to support the context but that they can also have the opposite effect.
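For automated processing, the five types and the per-datum assessments could be captured in a small data structure. The sketch below is only a hypothetical Python representation of a classified datum, not part of the original model; the binary correct/precise/verifiable flags are our assumption.

```python
from dataclasses import dataclass
from enum import Enum

class DataType(Enum):
    PRIMARY = "primary"          # set of words forming the core of the context
    SECONDARY = "secondary"      # anti-data: tends to be primary but gives no content
    METADATA = "metadata"        # indicator of the nature of other data
    OPERATIONAL = "operational"  # consequences / internal functioning of the context
    DERIVED = "derived"          # drawn from another source, not part of the context

@dataclass
class Datum:
    text: str
    dtype: DataType
    verifiable: bool
    correct: bool  # contributes to 'is'
    precise: bool  # contributes to 'pr'
```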

In the example sentence:

“Radoš assumes that the money that Croatia sent to Mostar during the war in 1993 and 1994 was nothing less, in fact, it could only have been greater because war costs money.”

the context is determined:

proving the international character of the conflict in BiH through Croatia’s involvement in that conflict.

and the data are isolated:

  1. Radoš assumes

  2. 1993 and 1994 were war years

  3. Radoš assumes that the money that Croatia sent to Mostar during the war in 1993 and 1994 was no less (information in the previous sentence)

  4. the money could only be bigger (information in the previous sentence)

  5. Radoš assumes that war costs money

and classified:

  1. derived – the conclusion that Radoš assumes is derived from another source

  2. primary – temporally determines the conflict in question, the international character of which is proven

    As already mentioned above, given that data classification is still not standardized, some data can sometimes be classified as multiple types; this datum could be metadata if the core of the context were strictly interpreted. The conclusion regarding this specific classification was made as a consequence of interpreting the core of the context in terms of temporal and spatial determinants, without which the context would be seriously damaged, i.e., as a consequence of the interpretation that proving the international character of the conflict would be incomplete without the data on the years of the conflict.

  3. secondary – the data tends to be primary, but the verb “assume” introduces weakness into the set of words, that is, into the data;

    The fact that we learn that the amount “was no less” than another specific amount makes that amount determinable but, at the same time, introduces weakness into the information that tends to be primary. A possible objection to this classification could be that the determination of the amount does not necessarily have to be part of the core that proves or does not prove the international character of the conflict through Croatia’s involvement in it, but could instead be characterized as supporting the context. However, the fact that derived data are considered to support the context does not mean that every set of words whose composition, and consequently content, can support the context should be considered derived data. The essential determinant of derived data is that the data come from another source, and here it is unlikely that data 3 and 4 have a different source than data 2.

  4. secondary-“only bigger” has the same status as “was no less”

  5. derived – if we look at the set of words “war costs” within the sentence to which it belongs, it can be concluded that it is a generalized view that any war costs money, so in that sense it is derived from another source. It is not clear from the sentence whether this is Radoš’s data; following data 3 and 4, the classification could then be done in such a way that this data is classified as secondary. Another possibility is that it is a comment by the author. If it were Radoš’s data, the classification would go in the direction of secondary data, because the set of words, apart from representing an anomaly in the fundamental sense, does not bring specific content in this context and so cannot be primary data in the sense of “The war in Bosnia and Herzegovina costs X.” The decision on the type was eventually made for all the above reasons, especially because of the greater probability that this is a general attitude of Radoš or of the author, which has its source somewhere outside the core of the context. The functionality of this derived data in terms of context support is questionable.

Following the parallel between the operating system and the context, metadata, as indicators of the nature of some other data, usually primary data, require the least adjustment because the definition applies directly to the scope of this research.

Operational data, originally data about the operation of the data system as a whole, are in this framework data that speak about the consequences of the context content, the implementation of the context content, and its internal functioning.

In the example sentence:

“The fact is that the contracts for the procurement of various spare parts for Hrvatsko ratno zrakoplovstvo (HRZ) (Croatian Air Force), and the affairs behind them, have led to the fact that today only one combat aircraft and one transport helicopter are flying.”

the context is determined:

the state was damaged by the payments under disputed contracts.

data are isolated:

  1. contracts were concluded on the procurement of various spare parts for HRZ

  2. affairs are dragging on

  3. contracts produced consequences

  4. affairs produced consequences

  5. consequence is that only one fighter plane flies

  6. consequence is that only one transport helicopter flies

1 – primary data.

2, 3, 4 – metadata – data about the nature of the primary data, which is such that affairs drag on and that consequences are produced.

For data 4, the classification is questionable. Under a narrow interpretation of metadata, and if data 2 is correctly typed as metadata, data 4 would be metadata of metadata. The conclusion is that data 4 is typified as metadata because the very definition of metadata indicates that metadata are indicators of the nature of other data, most often primary data. Such a definition therefore also allows metadata to be the type of data whose nature other metadata indicate. In this framework, given that the data are isolated and classified in relation to the context, that metadata 2 is an indicator of the nature of data 1, and that, in terms of content, the affairs are directly tied to the conclusion of the contracts, metadata 4 is metadata of metadata 2 and, compared to the primary data, metadata in the second degree.

5 and 6 – operational data – they bring content to the consequences or causes of the context content and show the internal functioning of the context.

After classifying all the data in the information, we give each datum points for correctness and preciseness and then add up all the points for correctness and all the points for preciseness to get the overall results for is and pr. We multiply that sum by the number of verifiable data (PP).

KVg = PP × (is + pr) (E10)

is – correctness of data (total points for all data in the information).

pr – preciseness of data (total points for all data in the information).
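Using the hypothetical Datum structure sketched above, and assuming one point per correct and per precise datum (the chapter does not fix the point scale), the procedure for E10 might look like this:

```python
def average_data_quality(data: list) -> float:
    """KVg = PP x (is + pr) (E10): count of verifiable data times the summed
    correctness and preciseness points over all data in the information."""
    pp = sum(1 for d in data if d.verifiable)
    is_points = sum(1 for d in data if d.correct)
    pr_points = sum(1 for d in data if d.precise)
    return pp * (is_points + pr_points)
```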

4.7 Informativeness

The type of data is important for calculating informativeness. Although the terminology [11] is not yet standard and fixed, typological neutrality, as another implication of the general definition of information, gives us five types of data classification. They are not to be understood exclusively or rigidly because, depending on the sort of analysis and the level of abstraction adopted, the same data may fit different classifications.

I = 10 − X (E11)
X = 1 + SP / PRP (E12)

SP – secondary data.

PRP – primary data.

We take 10 as the maximum for informativeness, following the interpretations stating that the more the probability of a message decreases, the more its informativeness increases, but when the endpoint is reached, the statement implodes; this is the Bar-Hillel-Carnap paradox (BCP): too informative to be true [10]. Although we did not include the concept of truth in this research but rather truthlikeness as the instrument we can work with, we cannot deny that truth is one of the essential goals of relevant information. Accordingly, we expect that 1 (or in our case 10) is too informative to be true, so we calculate everything under 10 using X as the changeable element, defining the change as the changeable proportion of secondary data relative to the number of primary data in the particular information.
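Assuming the reconstructed form I = 10 − X with X = 1 + SP/PRP, informativeness never reaches the BCP ceiling of 10: with no secondary data, X = 1 and I = 9. A minimal Python sketch:

```python
def informativeness(sp: int, prp: int) -> float:
    """I = 10 - X with X = 1 + SP / PRP (E11, E12).

    More secondary data (omissions) relative to primary data lowers I;
    PRP is assumed to be at least 1, since information with no primary
    data would leave X undefined.
    """
    x = 1 + sp / prp
    return 10 - x
```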

By using the number of primary data, as the basic data stored in a database, and the number of secondary data, as the opposite of primary data representing their absence (anti-data), we tend to confirm secondary data as a carrier of informativeness as well. In this research, as Beard [19] writes, “(it) might be said that a system is any entity, conceptual, or physical, which consists of interdependent parts and that a systemic approach consists in trying to look upon things as a whole. We need to be able to appreciate the importance of both ‘detail complexity’ and ‘dynamic complexity’ at the same time.” We were obliged to translate the definitions of the particular types of data. At the beginning of the research, we defined the types of data with regard to the event (the zero event or any following event). Later, it turned out that the more logical choice was to define them in relation to the context. Sometimes we noticed overlaps, obtaining the same or similar results. Nevertheless, the conclusion is that the right analogy is system-context and not system-event: if we look at a sentence in context, and if it was written after an event, known or unknown, that event happened in the system, that is, in the syntax or context. The share of secondary data in the number of primary data should show the relationship between data omissions and data presentations.

A problem we could and should discuss here is how data are actually defined; for now, we are satisfied with the fact that it was done consistently throughout the whole research. Also, as mentioned before, there are general uncertainties about this neutrality, so we can consider this procedure one of the ways to test it.

Furthermore, informativeness turned out to be a characteristic that does not have much impact on deciding whether a sentence should be published. As the theories already show, it is an interesting concept for research and can be a source of deeper insight generally speaking, but we did not attach major importance to it in the practical aspect of this framework.


5. Example

Within the theme framework, a media article is chosen. One example of matrix analysis (third research phase) will be presented in this paper.

It is important to emphasize that this research was done in Croatian, so slightly different results are expected regarding the parts inherent to the specific language. It is doubtful, nevertheless, that the variation would be significant. The example presented here was done in Croatian, and we see only a translation of it, not an adaptation to possibly different rules for analysis in English.

Nacional 518/2005.

“Two investigations against Nobilo”

  1. From the article, one or two sentences will be picked, based on the narrative analysis, as representative of the article as a whole. Sentences in this research are considered as information. The context of the sentence will be defined simultaneously.

    “The ‘Dubrovačka banka’ affair heralded the beginning of the end of Tudjman’s ten-year rule, but in a way, it was an indication for revealing the affairs that followed: ‘Banks and the banking system can hide many secrets’, ‘If the adviser to the president can lie and be related to crime, why couldn’t it be the president himself?’ These are just some of the things that Croatian citizens have started to think about.”

    Context: The Dubrovačka banka affair overthrew the all-powerful HDZ

  2. The next step is defining the zero event as the event that is the motive of the published article. If we talk about serial reporting, it is necessary to register connected events which happen after the zero event and which are the motive for the published article.

    Zero event: 26.02.1997, a deposited secret partnership agreement by which partners were to become majority owners of Dubrovačka banka without investing a single kuna in shares

  3. Using structural analysis, syntax analysis, and grammatical analysis, we extract data from the information.

    Data 1 The Dubrovačka banka scandal signaled the beginning of the end of Tudjman’s ten-year rule.

    Data 2 The scandal was, in a way, a starting point for revealing the scandals that followed.

    Data 3 Banks and the banking system can hide many secrets.

    Data 4 If the president’s adviser can lie and be involved in crime, there are suspicions that the president could be doing the same.

    Data 5 Croatian citizens have started thinking about banks and a banking system that can hide many secrets.

    Data 6 Croatian citizens have started thinking that the president could lie and be connected to crime if he has an adviser who can lie and be connected to crime.

    Data 7 Croatian citizens began to think of other things as well.

  4. Counting the total number of verbs, nouns, and adjectives in the information (IGP)

    IGP = 34

  5. Checking the verifiability of data

    Nowadays, the right of access to information exists, which makes more data verifiable. The chosen article, however, dates from the beginning of the 2000s, when other criteria applied, and we had to adhere to them. Here we found two unverifiable and five verifiable data.

    Data 1 The Dubrovačka banka scandal signaled the beginning of the end of Tudjman’s ten-year rule – unverifiable.

    Data 2 The scandal was, in a way, a starting point for revealing the scandals that followed – unverifiable.

    Data 3 Banks and the banking system can hide many secrets – verifiable.

    Data 4 If the president’s adviser can lie and be involved in crime, there are suspicions that the president could be doing the same – verifiable.

    Data 5 Croatian citizens have started thinking about banks and a banking system that can hide many secrets – verifiable.

    Data 6 Croatian citizens have started thinking that the president could lie and be connected to crime if he has an adviser who can lie and be connected to crime – verifiable.

    Data 7 Croatian citizens began to think of other things as well – verifiable.

  6. Classifying data

    Data 1 The Dubrovačka banka scandal signaled the beginning of the end of Tudjman’s ten-year rule – primary.

    Data 2 The scandal was, in a way, a starting point for revealing the scandals that followed – secondary.

    Data 3 Banks and the banking system can hide many secrets – derivative.

    Data 4 If the president’s adviser can lie and be involved in crime, there are suspicions that the president could be doing the same – metadata.

    Data 5 Croatian citizens have started thinking about banks and a banking system that can hide many secrets – operational.

    Data 6 Croatian citizens have started thinking that the president could lie and be connected to crime if he has an adviser who can lie and be connected to crime – operational.

    Data 7 Croatian citizens began to think of other things as well – operational.

  7. Assessing the quality of data

    Data 1 The Dubrovačka banka scandal signaled the beginning of the end of Tudjman’s ten-year rule – is.

    Data 2 The scandal was, in a way, a starting point for revealing the scandals that followed – is.

    Data 3 Banks and the banking system can hide many secrets – is.

    Data 4 If the president’s adviser can lie and be involved in crime, there are suspicions that the president could be doing the same – is, pr.

    Data 5 Croatian citizens have started thinking about banks and a banking system that can hide many secrets – is.

    Data 6 Croatian citizens have started thinking that the president could lie and be connected to crime if he has an adviser who can lie and be connected to crime – is, pr.

    Data 7 Croatian citizens began to think of other things as well – is.

  8. Informativeness

    I = 10 - x (E13)

    x = 1 + 1 : 1 = 2 (E14)

    I = 10 - 2 = 8 (E15)
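
To make the arithmetic of step 8 reproducible, here is a minimal sketch in Python. It assumes, as the worked numbers suggest, that x equals one plus the ratio of secondary to primary data (SP : PRP, here 1 : 1) and that informativeness is I = 10 - x; the function name is illustrative, not part of the published framework.

    # Minimal sketch of the informativeness calculation (step 8), assuming
    # x = 1 + SP/PRP and I = 10 - x, as reconstructed from the worked example.
    def informativeness(primary: int, secondary: int) -> float:
        if primary == 0:
            # The text states only that, without primary data, secondary data
            # do not affect informativeness; it gives no value for this case.
            raise ValueError("no primary data: value not specified in the source")
        x = 1 + secondary / primary
        return 10 - x

    # The example information contains one primary and one secondary datum:
    print(informativeness(primary=1, secondary=1))  # x = 2, so I = 8.0

With more primary than secondary data, the ratio falls and I rises, which matches the conclusion below that informativeness is positively influenced by a larger number of primary data.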


6. Results

ST = 5 : 7 = 0.71 (E16)

R = (7 + 2) × 0.71 = 6.39 (E17)

KVg = (7 + 2) × 5 = 45 (E18)

Kor = (45 : 34) × 0 × 8 = 0 (E19)
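
As a minimal sketch, the four results can be reproduced in Python under stated assumptions: the grouping of terms, the reading of the two constants "2" (as the number of unverifiable data in E17 and as the number of "pr" marks in E18), and the weight 5 in E18 are inferred from the arithmetic of the worked example, not given explicitly in this excerpt; variable names follow the abbreviations list below.

    # Sketch of the result formulas E16-E19, reconstructed from the example.
    UBP = 7        # total number of data in the information
    PP = 5         # number of verifiable data
    NP = UBP - PP  # unverifiable data (assumed reading of the "2" in E17)
    is_marks = 7   # data marked "is" (correctness) in step 7
    pr_marks = 2   # data marked "pr" (preciseness) in step 7
    IGP = 34       # verbs, nouns, and adjectives counted in step 4
    D = 0          # time index of this article
    I = 8          # informativeness from step 8

    ST = round(PP / UBP, 2)          # truthlikeness: 5 : 7 = 0.71
    R = (UBP + NP) * ST              # relevance: (7 + 2) x 0.71 = 6.39
    KVg = (is_marks + pr_marks) * 5  # average data quality: (7 + 2) x 5 = 45
    Kor = (KVg / IGP) * D * I        # usefulness: (45 : 34) x 0 x 8 = 0

    print(ST, round(R, 2), KVg, Kor)  # 0.71 6.39 45 0.0

Because the time index D is zero here, usefulness Kor vanishes even though relevance R remains above zero, the situation discussed in the conclusion.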

7. Conclusion

The optimal structure of media reporting on sensitive issues related to national security would provide the optimal amount of information, integrating the essential tension between two contradictory demands. Although the formulas presented here are preliminary, this research showed that it is possible to quantify information, in a certain sense, in order to decide on its value. We aimed to do so mostly with tools from the information sciences, but also by including basics from other disciplines, such as philosophy, logic, and linguistics. There are still open questions, and further work is needed to resolve the remaining uncertainties.

Basic conclusions are formed as answers to the research questions. 1) Semantic information theory, understood in a broader sense, is an appropriate framework in the described scope for the analysis of the research problem. After the selection of units for analysis, segmented parts of the theoretical framework were applied; the selection was made in accordance with an assessment of purposefulness, that is, with the anticipated usefulness of the result in the context of the research goal. The value of this framework is most evident in its comprehensiveness, which opens up space for in-depth analysis of information. The general definition of information is certainly not a narrow definition that excludes much in order to be precise, but this is a consequence of the nature of the object being defined. It would be unnatural, unscientific, and unnecessary to define a term arbitrarily without being able to defend the definition, not only theoretically but also empirically. In this sense, a large number of authors try to encompass the content in the different ways described in the theoretical analysis, and each of the definitions considered here has left its mark on the results of the analyses. Besides the general definition of information used in this framework, information can, with some adaptation, be defined as a subset of data, or a subset of data extended by an additional part deduced, calculated, or refined from that subset. In the original definition, Fricke [20] defines information as a subset of relevant data together with the results of conclusions derived from that data, so it clearly follows that the subset can be enlarged by additional parts derived or calculated from it. In this research, the category of relevance is linked to information, not to data, although its calculation depends on the characteristics of the data the information consists of. The definition is therefore adapted to this premise, and thus loses neither its value nor its essence, contained in the word subset. That conclusion emphasizes the cohesive-reductionist property of information.

The compatibility of the theoretical framework and the empirically established relationships is manifested in the finding that a sentence does not have to be useful to be relevant. In other words, there were sentences whose usefulness was calculated to be zero while their relevance had some value above zero, but no result of the opposite kind. Usefulness directly depends on the time index, and in this sense the value of information is variable; however, when the time index is greater than zero, the growth of usefulness is influenced more by the average data quality. Relevance does not depend on the time index and is therefore a conditionally unchangeable value of information. A change in its calculation could occur only due to an external agent affecting the verifiability of the data the information consists of. Informativeness does not depend on the verifiability coefficient and has no significant effect on the decision to publish the information. If there is no primary data in the information, its informativeness is not affected by the existence of secondary data; the level of informativeness is positively influenced by a larger number of primary data compared to the number of secondary data in the information.

Foundations for further research have been established, and depending on one’s interests, investigations could go in different directions. Some of them have already been outlined in the text, and some are yet to be identified; in any case, we now have results we can work with in the future, by confirming them, improving them, or even refuting them.


Abbreviations

R

Relevance of the information

Kor

Usefulness of the information

KV(g)

Average quality of data

GDI

General definition of information

ST

Truthlikeness of the information

PP

Number of verifiable data in the information

UBP

Total number of data in the information

is

Correctness of data in the information

pr

Preciseness of data in the information

D

Time index

I

Informativeness of the information

SP

Secondary data

PRP

Primary data
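
IGP

Total number of verbs, nouns, and adjectives in the information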

References

  1. McQuail D. Political communication research in the public interest. In: Coleman S, Moss G, Parry K, editors. Can the Media Serve Democracy? London: Palgrave Macmillan; 2015. pp. 76-86. DOI: 10.1057/9781137467928_7
  2. McQuail D. Mass Communication Theory. 6th ed. Los Angeles, London, New Delhi, Singapore, Washington D.C.: SAGE Publications; 2005. 632 p
  3. Gill P, Phythian M. Intelligence in an Insecure World. Oxford: Polity Press; 2018. p. 288
  4. Gelo D. Media freedom and regulation in the context of reporting on national security issues. In: 7th International Conference The Future of Information Sciences INFuture 2019: Knowledge in the Digital Age; Zagreb, 21-22 November 2019. Zagreb: Faculty of Humanities and Social Sciences, University of Zagreb, Department of Information and Communication Sciences, FF Open Press; pp. 183-189
  5. Omand D. Reflections on intelligence analysts and policymakers. International Journal of Intelligence and CounterIntelligence. 2020;33:471-482. DOI: 10.1080/08850607.2020.1754679
  6. Jain AK, Mao J, Mohiuddin KM. Artificial neural networks: A tutorial. Computer. 1996;29:31-44. DOI: 10.1109/2.485891
  7. Ma L. Meanings of information: The assumptions and research consequences of three foundational LIS theories. Journal of the American Society for Information Science and Technology. 2012;63:716-723. DOI: 10.1002/asi.21711
  8. Buzan B. People, States, and Fear: The National Security Problem in International Relations. Brighton, Sussex: Wheatsheaf Books; 1983. p. 262
  9. Giannopoulos G, Smith H, Theocharidou M. The Landscape of Hybrid Threats: A Conceptual Model. EUR 30585 EN. Luxembourg: Publications Office of the European Union; 2021. DOI: 10.2760/44985
  10. Floridi L. The Philosophy of Information. Oxford: Oxford University Press; 2011. p. 405
  11. Floridi L. Philosophical conceptions of information. In: Sommaruga G, editor. Formal Theories of Information: From Shannon to Semantic Information Theory and General Concepts of Information (Lecture Notes in Computer Science 5363). New York: Springer; 2009. pp. 13-53. DOI: 10.1007/978-3-642-00659-3_2
  12. Act on the Right of Access to Information. Official Gazette of the Republic of Croatia no. 25/13 and 85/15
  13. Saračević T. Prilozi Utemeljenju Informacijske Znanosti (Contributions to the Foundation of Information Science). Osijek: Filozofski fakultet; 2006. 266 p
  14. Oddie G. Truthlikeness. In: Zalta EN, editor. The Stanford Encyclopedia of Philosophy. Winter 2016 ed. Stanford: Metaphysics Research Lab, Stanford University; 2016. Available from: https://plato.stanford.edu/archives/win2016/entries/truthlikeness/
  15. Saračević T. Relevance: A review of the literature and a framework for thinking on the notion in information science. Part II: Nature and manifestations of relevance. Journal of the American Society for Information Science and Technology. 2007;58:1915-1933. DOI: 10.1002/asi.20682
  16. Greisdorf H. Relevance: An interdisciplinary and information science perspective. Informing Science. 2000;3:67-72
  17. Goffman W. On relevance as a measure. Information Storage & Retrieval. 1964;2:201-203
  18. Schamber L, Eisenberg MB, Nilan MS. A re-examination of relevance: Toward a dynamic, situational definition. Information Processing and Management. 1990;26:755-776. DOI: 10.1016/0306-4573(90)90050-C
  19. Beard AN. Some ideas on a systematic approach. Civil Engineering and Environmental Systems. 1999;16:197-209. DOI: 10.1080/02630259908970262
  20. Fricke M. The knowledge pyramid: A critique of the DIKW hierarchy. Journal of Information Science. 2009;35:131-142. DOI: 10.1177/0165551508094050
