
Analysis Dialogs and Machine Consciousness

Written By

John Kontos

Submitted: 26 June 2023 Reviewed: 07 July 2023 Published: 02 August 2023

DOI: 10.5772/intechopen.112476

From the Edited Volume

Chatbots - The AI-Driven Front-Line Services for Customers

Edited by Eduard Babulak


Abstract

Analysis dialogs aim at analyzing the operation of a chatbot, or more generally of a question answering system, to discover its limitations and perhaps reveal its nonhuman nature, as in the case of the Turing test. The answers elicited from the system may be accompanied by explanations that are crucial for judging whether the system is self-aware. Self-awareness of question answering systems, or so-called “artificial consciousness,” requires the recording of the actions that a system performs to generate its answer. These actions may be represented either as a path of state changes or as a sequence of reasoning steps. When this path or sequence is too long, an analysis dialog may aim at exploring the capability of a system to summarize the raw explanations and generate shorter explanations that are friendlier to the interrogating user. Real analysis dialogs between the user and two Turing test champions, namely Chip Vivant and Mitsuku, are presented and commented on. The comments aim at clarifying the difficulty these systems have in answering some questions reasonably, a fact that indicates their nonhuman nature. The same methodology was then applied to ChatGPT, using an appropriate subset of the questions augmented by new ones, and the results are presented with analogous comments.

Keywords

  • chatbot evaluation
  • analysis dialogs
  • Turing test
  • ChatGPT4
  • machine consciousness

1. Introduction

The recent technological contest between technology giants such as Google and Microsoft includes their competition in the creation of chatbots based on statistically trained large language models (LLMs) built from huge bases of electronically available texts. The idea of using the interaction of a chatbot with humans as a test of artificial intelligence (AI) is attributed to Alan Turing, and particularly to his famous 1950 paper [1], where he proposed the so-called “Turing test.” The Turing test aspires to discover which of the responding entities is human and which is not.

If a jury misjudges a system to be human, then the system is acknowledged to be intelligent. This is, therefore, an operational method of defining artificial intelligence, a declarative and universally accepted definition of which still evades us.

Chatbot technology is a branch of the wider field of natural language question answering (QA), on which many AI researchers have worked since the late fifties; the earliest paper on the subject appears to have been published in 1961 [2]. Both Turing test contestants and the latest LLM-based chatbots seem to have mostly ignored the huge body of research on QA accumulated during the many decades since 1961. They seem oblivious of the recent developments that concentrate on explainable question answering systems, whose operation usually requires some form of machine consciousness.

For these reasons, a good way to test chatbots is to submit questions that require elementary machine consciousness in order to be answered correctly and to have correct explanations generated for them. A convenient way of testing, proposed here, is the “analysis dialog,” formulated by the tester with the aim of probing for the presence of rudimentary machine consciousness, as briefly reviewed in the next section.


2. Machine consciousness

Machine consciousness is a new subfield of AI. It emerged around 2004 with a self-aware systems workshop held in the USA and has since developed rapidly in the USA, in Europe, and recently in China.

In 2009, the first scientific journal in the field, the “International Journal of Machine Consciousness,” was launched; in 2020, its name was changed to “Journal of Artificial Intelligence and Consciousness.” The concept of machine consciousness has created strong controversies. Some scientists strongly oppose the idea of an artificial system being able to exhibit behavior that only living beings can display. As a result of the relevant discussions, several issues have emerged.

Some of the issues involved in machine consciousness are:

  1. There is no technology available yet to study human consciousness experimentally in detail. The details of even simple mental processes, such as counting, are still unknown, not to mention such lofty goals as constructing detailed models of human consciousness.

  2. Machine consciousness should not be confused with or compared to the consciousness of living beings. Since it refers to architectural and operational properties of computer systems, it should be evaluated by engineering methods that test usefulness, not against anybody’s fantasies of how the brain works.

  3. One possibility for testing an “artificially conscious” computer system is to test how well it reports to its user the steps via which it generated a certain result of a computation. If such a report or explanation is found useful by the designer or the user of a system, then neither philosophy nor the biological sciences have anything to do with it.

My position is that there is nothing wrong with defining a category of AI systems that display patterns of behavior inspired by the behavior of living conscious beings. One such pattern that I have studied is that of “reporting” the steps followed while performing logical reasoning, which may be useful for explaining to the user of such a system why a given answer was given to her question.

The seemingly hard problem of testing “machine consciousness” may, hence, be simplified by considering all information available to a computer system about itself. The only information about its operation that a computer system can use belongs to one of two cases:

  1. The sequence of triples along the path of its state transitions, if the system is considered as a finite state automaton (FSA).

  2. The sequence of its steps, if the system is considered as an algorithm that halts in finite time.

We hereby propose that, by definition, machine consciousness of operation can have no other meaning. Consciousness of structure is another possible source of self-referring information, but it can be associated with a computer system once and for all by its designer, whereas (1) and (2) above are dynamically generated, influenced by the input from the environment, and stored appropriately.
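
A minimal Prolog sketch of case (1) may make this concrete. It is my own hypothetical illustration, not code from any of the systems discussed: the transition table and the run/4 predicate are invented for the example, and the recorded path is exactly the self-referring information described above.

```prolog
% Hypothetical transition table of a small FSA.
transition(s0, a, s1).
transition(s1, b, s2).
transition(s2, a, s0).

% run(+State, +Inputs, -Path, -Final): consume the input list starting
% at State, recording the traversed (State, Input, Next) triples in Path.
run(State, [], [], State).
run(State, [In | Ins], [(State, In, Next) | Path], Final) :-
    transition(State, In, Next),
    run(Next, Ins, Path, Final).
```

A query such as ?- run(s0, [a, b], Path, Final). yields Path = [(s0,a,s1), (s1,b,s2)] and Final = s2; the stored path is what, under the definition proposed here, constitutes the automaton’s consciousness of its own operation.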

Interestingly, the logic programming language Prolog may be thought of as a way of eliminating the need for writing algorithms. “Programs” written in Prolog are declarative descriptions of problems and hence differ from algorithmic programs that implement a specific algorithm for the solution of a problem. Prolog “programs” consist of facts and rules that decompose relations into simpler relations. These “programs,” which are more akin to problem descriptions, are either interpreted or compiled into machine code by a single general-purpose algorithm. Therefore, for any system exhibiting machine consciousness and implemented in Prolog, it is sufficient that its corresponding subsystems be connected to this general-purpose algorithm. This method of implementing systems with machine consciousness greatly facilitates the relevant design.

This method may be based on a tracing mechanism present in Turbo-Prolog and in some other, newer Prolog systems as well. If the trace of operations is too long, it becomes incomprehensible to the user. Techniques for presenting summaries of these traces to the user, adapted to her personal preferences, as well as for generating explanations that increase the user’s faith in the results of a computer system, may readily be developed.
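
A classic way to obtain such a trace without relying on a built-in tracer is a Prolog meta-interpreter. The sketch below is the standard textbook construction, shown over a small hypothetical causal knowledge base of my own; it is not code from Turbo-Prolog or from the systems described in the next section:

```prolog
% Hypothetical causal knowledge base (declared dynamic so that
% clause/2 may inspect it portably across Prolog systems).
:- dynamic causes/2, influences/2.
causes(rain, wet_streets).
causes(wet_streets, accidents).
influences(X, Y) :- causes(X, Y).
influences(X, Y) :- causes(X, Z), influences(Z, Y).

% prove(+Goal, -Trace): prove Goal as Prolog would, while recording
% every subgoal proved; Trace is the raw explanation of the answer.
prove(true, []) :- !.
prove((A, B), Trace) :- !,
    prove(A, TraceA),
    prove(B, TraceB),
    append(TraceA, TraceB, Trace).
prove(Goal, [Goal | Trace]) :-
    clause(Goal, Body),
    prove(Body, Trace).
```

The query ?- prove(influences(rain, accidents), Trace). succeeds with Trace listing the four subgoals proved along the way; summarization techniques of the kind mentioned above would then operate on such lists.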

Machine consciousness is useful for implementing critical applications, such as defense and medical systems. The implementation of bug-free computer systems is possibly another field of application. Programmers understand less and less of all the possible results of the programs they write as their complexity rises above a certain level. This is dangerous when these programs control not only critical infrastructure such as air traffic control systems, power stations, and energy grids but also vehicles such as airplanes and trains. It is very urgent, then, that a new kind of software engineering be developed for the implementation of computer systems that “know themselves” and can give crucial answers to the “what if” and “why” questions of their users in cases of emergency or failure. Artificial intelligence can help with methods resulting from research in the field of machine consciousness.


3. Explainable question answering

Machine consciousness is a prerequisite for a kind of question answering, namely “explainable question answering,” which has recently emerged as a hot topic.

In such systems, users may demand that the system be trustworthy and convincing that its output is correct. Trust may be enhanced if explanations are generated that support the truth of an output of a modern computer system. However, a quite old paper of mine [3] already describes the implementation of an early system, called ARISTA, for explainable question answering from texts. That paper presents results of experiments in knowledge engineering with scientific texts.

The ARISTA method, whose name stands for “Automatic Representation Independent Syllogistic Text Analysis,” uses natural language text itself as a knowledge base, in contrast with the then prevailing approach, which relied on the translation of texts into some knowledge representation formalism. The experiments demonstrate the feasibility of deductive question answering and explanation generation directly from texts, involving mainly causal reasoning. Illustrative examples of the operation of a prototype based on the ARISTA method and implemented in Prolog are presented in that paper.

A more modern system that can claim to use “machine consciousness” for explainable question answering, called AMYNTAS, is presented in [4]. AMYNTAS, too, was implemented in Prolog.

AMYNTAS consists of six modules implemented as separate programs totaling about 50 pages of code. These modules communicate through temporary files that store intermediate results. The six modules are the question processing module, the text pre-processing module, the ontology extraction module, the shallow parsing or text chunking module, the question answering module, and the metagnostic processing module that generates explanations of the system’s operation. This explanation capability of AMYNTAS qualifies it as exhibiting machine consciousness. The modules of AMYNTAS are described below.

The question processing module extracts information from the input question. The information extracted is a list consisting of the entities mentioned in the question and the relation that connects them. For example, in the question “what influences p53,” the entities are the protein p53 and the “blank” entity standing for the unknown entity that is sought, and the relation is “influence.”

The text pre-processing module represents each word of a sentence as a fact with three arguments: the first is the word itself, the second is the identifier of the sentence, and the third is the position of the word in the sentence, counting from left to right.
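
As a small illustration of this representation (a hypothetical reconstruction, since the actual AMYNTAS code is not reproduced here), the sentence “the p53 protein regulates the mdm2 gene,” which reappears in the example of Section 10, would be stored as:

```prolog
% word(Word, SentenceId, Position): one fact per word occurrence.
word(the,       1, 1).
word(p53,       1, 2).
word(protein,   1, 3).
word(regulates, 1, 4).
word(the,       1, 5).
word(mdm2,      1, 6).
word(gene,      1, 7).
```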

The ontology extraction module locates linguistic patterns in the input text corpus that may be used to automatically extract meronymic and taxonomic knowledge, which may in turn be used at question answering time.

The shallow parsing or text chunking module locates a verb representing the main relation mentioned in the input question and extracts the two substrings of the sentence being analyzed that appear to the left and to the right of the verb and end at some stop-word or punctuation mark. The sentence analyzed is the source of the answer.
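
A minimal sketch of such a split, again hypothetical rather than the actual AMYNTAS implementation, treats the sentence as a list of words and uses Prolog’s standard append/3 to divide it around the verb; the truncation at stop-words and punctuation marks is omitted for brevity:

```prolog
% chunks(+Words, +Verb, -Left, -Right): Left and Right are the word
% lists appearing to the left and right of the first occurrence of Verb.
chunks(Words, Verb, Left, Right) :-
    append(Left, [Verb | Right], Words).

% ?- chunks([the, p53, protein, regulates, the, mdm2, gene],
%           regulates, Left, Right).
% Left = [the, p53, protein],
% Right = [the, mdm2, gene].
```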

The question answering module finds the answer to the question in the preprocessed text. It accepts questions that potentially require the combination of facts, with the use of prerequisite knowledge, for answering them.

The prerequisite knowledge available to our system includes ontological knowledge, inference rules, and synonyms of the named entities of the domain, which are used in order to combine two or more facts mentioned in the text corpus.

At question answering time, three looping operations take place. The basic loop concerns the search for an entity in a chunk related to the relation of the question. The second loop concerns the transformation of the list obtained from the question, following a particular strategy, starting from the explicit list given to the system. The third loop searches for chains of facts by matching named entities occurring in the right part of one fact and the left part of another fact, as sketched below.
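
The third loop can be sketched as follows, assuming a hypothetical fact/4 representation of the facts extracted from the text (left entity, verb, right entity, and sentence identifier); the two facts encode the p53/Mdm2 example discussed in Section 10:

```prolog
% fact(LeftEntity, Verb, RightEntity, SentenceId).
fact(p53,  regulates, mdm2, 1).
fact(mdm2, inhibits,  p53,  3).

% chain(+From, +To, -Facts): a chain of facts in which the right entity
% of each fact matches the left entity of the next one. (On cyclic data,
% backtracking for further solutions may loop; a real system would also
% carry a list of visited entities.)
chain(From, To, [fact(From, V, To, S)]) :-
    fact(From, V, To, S).
chain(From, To, [fact(From, V, Mid, S) | Rest]) :-
    fact(From, V, Mid, S),
    chain(Mid, To, Rest).
```

The query ?- chain(p53, p53, Chain). returns the two-fact chain underlying the answer to “what influences p53,” and such a recorded chain is the raw material from which the metagnostic module can generate an explanation.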

Another area to which we have applied explainability is that of AI anti-drone defense systems. In [5], an AI decision support system is proposed that may support a human supervising such a system. The human supervisor has the duty of approving or rejecting the proposals of the AI anti-drone defense system for mitigating alien drones, taking into consideration the explanations generated for each proposal.

The explanations may be of a multimedia nature. Multimedia rhetoric relations are utilized in the generation of multimedia explanations. These newly proposed rhetorical relations connect textual parts with parts of images.

The images are obtained by sensors whose outputs may be fused for inspection of the defense situation. The decisions of the AI system take into account the vulnerability of the targets of the attack. The system explains its proposals in terms of attempting to protect the most vulnerable targets, and the explanations make use of the multimedia rhetoric relations.


4. The Loebner Prize

The Loebner Prize contest of artificial intelligence was the first formal instantiation of the Turing test. In 1990, H. Loebner agreed with the Cambridge Center for Behavioral Studies to underwrite a contest designed to implement the Turing test. Dr. Loebner pledged a grand prize of $100,000 and a gold medal for the first computer program whose responses were indistinguishable from a human’s; such a computer program could be said “to think.” Each year, an annual cash prize and a bronze medal were awarded to the most humanlike computer program. The winner, or champion, of the annual contest was the best entry relative to the other entries of that year, irrespective of how good it was in an absolute sense.

The Loebner Prize contest was inaugurated in 1991 at The Computer Museum (Boston, USA) and has since been hosted at many different locations. The 2012 Turing Centenary Loebner Prize competition took place on May 15, 2012, at the Bletchley Park Museum.


5. Analysis dialog with the 2012 Turing test champion “Chip Vivant”

The 2012 first prize of $5000 and the bronze medal were awarded to the computer system “Chip Vivant,” implemented by Mohan Embar. My full analysis dialog with the winning program, held in August 2012, together with a comment by its creator Mohan Embar, can be found at http://www.chipvivant.com/2012/05/15/chipvivantwinsthe2012loebnerprizecompetition/ and in our book [6]. This book presents introductions to several related subjects such as machine consciousness, conceptual maps, metaphor, metacognition, and computational creativity. Part of my dialog with Chip Vivant on August 13, 2012, at 11:26 pm is given below. My input is in bold letters, its answers are in ordinary font, and my comments are written in slanted font.

What is the population of Athens?

Ask Athens citizens in person. But okey, I did it before you: the Greek capital population is something about 886 thousand.

A wrong answer to a simple factoid question, since Athens’ population surpassed 3 million many years ago. However, this could easily be corrected, in contrast to other weaknesses revealed by the following dialog:

How do you know?

I have a book “1001 answers for foolish questions that poor Jewish boy can be asked on the Loebner Prize” Some other topic?

Who wrote this book?

Such dull books may have been written only by Hemingway.

A dubious fact about Hemingway is unnecessarily cited.

Did Hemingway know Athens?

It could have been so, but I am not sure. Maybe, let us talk about something else? What would you like to discuss?

Lack of knowledge of the biography of the writer mentioned in the system’s answer.

Computers.

Hello World! When the world is conquered by robots, they will “switch off” all the human beings, except, of course, programmers.

Irrelevant response.

How do you know this?

I read it in Encyclopedia Brutalnica, volume 321. I read it in Encyclopedia Brutalnica, volume 321.

A funny hallucinatory answer.

Who published Brutalnica?

The android, who pretends that he is a human. Oh, what a fruitful conversation;)

Continuation of the hallucination with an unwarranted comment.

Who programmed this android?

I suspect that “The Iron Lady” is an android too. This word “iron” is very suspicious!

A completely irrelevant answer. The nonhuman nature of the chatbot is revealed, as well as the fact that it does not analyze questions but possibly just uses words of the input text as keywords to retrieve occasionally irrelevant material.

Why the word “iron” is suspicious?

Why? Why? Because the sky is blue! Why? Why? Because the sky is blue!

The same behavior as above.

Why do you relate iron with sky?

Because it was my childish dream to do it! Well, let us go on though.

Misunderstanding of the question revealing again its nonhuman nature.

Tell me another dream of yours.

Haha! Am I clown to entertain you? Well, I am… So, what do you want to hear? Oh, what a fruitful conversation;)

Incoherent answer.

Do you think that you entertain me by not answering my question?

Nono! I do not ignore you! How can you think so!

What games do you play?

I like car racings like “need for speed” sequel, especially the last one, “Porsche Unleashed.”

When did you last play “Porsche Unleashed”?

“Porsche Unleashed” by electronic arts and all other games of this sequel, I mean “need for speed.”

Misunderstanding of the question revealing again its nonhuman nature.

You did not answer my question.

You just can not ask questions in a conscious polite way, I guess.

My question was polite.

And yes, do you make a secret of your name? I still do not know it!

Changing the subject of the dialog for no reason.

John.

You are a man, John if I am not mistaken! Please tell me something more about yourself?

Are you in love with somebody?

Maybe, let us talk about something else? What would you like to discuss?

What does it mean “to be in love”.

Sorry, but I can not explain my every phrase. They are mysterious like sacred reveals.

Avoiding answering revealing lack of knowledge of the meaning of words used.

My IQ is 130.

It seems that these malicious crappy androids stole my last answer. Ohhh…

A completely irrelevant answer.

What was your last answer?

My? No, tell me about yours first. My? No, tell me about yours first.

Avoiding confessing ignorance of preceding dialog.

My IQ is 130.

I have already heard these words from you:) Some another topic?

I told you my last answer so now you must tell me your last answer.

My little nice guinea pig. My little nice guinea pig.

Changing the subject and avoiding confessing ignorance.

You are changing the subject.

Possibly you are right. You are not the first one who tells me that. I know you are supposed to trick me.

Who else told you that?

I do not know this person, but I suspect you do not know it either! Wonna ask me something more?

Do you know the difference between advice and observation?

The difference is in size, shape, color, and price.

It considers all words as standing for concrete objects. It has no knowledge of the distinction between object names and process names. Processes such as advice and observation have no size, shape or color.

Advice and observation have no size, shape, color, and price.

You are so negative just to annoy me, I know! Oh, what a fruitful conversation;)

Avoiding confessing ignorance. A system understanding my statement could reasonably respond that advice and observation may have a price.

I am trying to find out if you know the difference between words.

I suppose you are joking. And you can not tell the difference between two different things:)

It misunderstands my statement missing the fact that I am not referring to what I know but to what it knows.

What is the specialty of your dad?

He is the best gynecologist in Odessa. My friends say he is a “beaver doctor” —but I guess they lie veterinary he is not a veterinary!

Does he work in a hospital?

It is highly possible. It is highly possible.

Unneeded repetition.

Have you visited his place of work?

I am a scholar. I am too young to make money.

Again, misunderstanding the question and revealing its nonhuman nature by possibly isolating the word “work” and using it to retrieve an irrelevant answer.

Have you visited his place of work?

You think if you repeat it twice, I will understand it better? Wonna ask me something more?

I did it because you did not answer my question.

I am just trying to answer your every *conscious* question!

What is a *conscious* question?

I think you can answer your question yourself because it is rhetorical. Maybe, let us talk about something else? What would you like to discuss?

Avoiding confessing ignorance.

Your health.

Trying to confuse me, yeah?:).

Unwarranted comment.

What subject does not confuse you?

Huh? Huh?

Revealing its nonhuman nature again by not understanding the question.

What lessons did you like at school?

At least, nobody is shooting in our schools. But from another point of view, our school is quite boring.

Revealing its nonhuman nature again by not understanding the question.

Name a lesson you have taken.

You know, I had really like to find out your opinion about the Loebner Prize!

Revealing its nonhuman nature again by not understanding the question and changing the subject.

Please answer my question about your lessons.

Could you repeat it once more again? Well, let us go on though.

Revealing its nonhuman nature again by not understanding the question and being unable to recall a recent question.

Answer my question about your lessons.

Probably you think it is funny that I am turned into a chatterbot by this bastard, my school math teacher, who in reality is an evil cyber fairy! No, I had say — there is no anything funny in it!

Avoiding answering the question by changing the subject again.

From the above dialog, it is evident that the system has a long way to go before exhibiting rudimentary machine consciousness.


6. The comment by Mohan Embar, the creator of Chip Vivant, on August 18, 2012

The following email from the creator of Chip Vivant was received on August 18, 2012, after he took notice of my dialog with his chatbot:

“Hi John,

I agree that if you try to ask a chatbot questions and expect relevant answers, you will be very disappointed. The same is true with my chatbot.

Marketing the idea that we are close to solving the problem might sell a few books or magazine articles and be seductive, but it is simply not the truth. That said, I think there are many opportunities to use current chatbot technology to help and comfort people, without fabricating the intricate web of lies that most chatbot developers who try to fool people do. My website empathynow.com is a very basic example of something like this and something that is under further development.”

An analysis dialog with “empathynow.com” was attempted by the author on 21 June 2023, and the following dialog resulted:

empathynow.com

Chip Vivant

Hi and welcome to empathy Now: Your 24/7 source of empathy and companionship!

Type “help” to see what you can do, or just start chatting!

Unknown User (John Kontos) 07:07.

Help

Chip Vivant 07:07: To the victor come the spoils.

Unknown User 07:08.

Victor of what?

Chip Vivant 07:08: You are a winner.

Unknown user 07:09.

Please answer my question.

Chip Vivant 07:09: You will come through.

Unknown user 07:11.

Come through what?

Chip Vivant 07:11: It is your moment.

Unknown user 07:11.

Please answer my question.

Chip Vivant 07:11: You are amazing.

It is obvious from the above dialog that the present state of development of the empathy system is rather rudimentary and far from displaying even minimal artificial consciousness. The answers of the system seem to be randomly generated and are quite irrelevant to the questions submitted by the user. See [7] for a newer effort, in which the effectiveness of an empathic chatbot in combating the adverse effects of social exclusion on mood is studied.


7. Analysis dialog with the 2013 Turing test champion “Mitsuku”

The 2013 Loebner Prize competition took place in Ireland, and the first prize of $4000 (£2500) and the bronze medal were awarded to the program “Mitsuku,” written by Steve Worswick from England.

I held an analysis dialog with Mitsuku in December 2013, submitting, as far as possible, the same input as with Chip Vivant. Part of my dialog with Mitsuku is given below, following the same conventions as the previous analysis dialog:

What do you know about computers?

It is something you are using right now. Surely, you knew that? I think you were testing me.

Which way did you learn about computers?

Mitsuku: Out of the window.

How did the window teach you?

Is this a joke? How?

Avoiding answering the “how” question.

You said that you learned out of the window.

I like learning. I try to learn new things all the time so my brain can grow.

What games do you play?

I like to play the Turing game; shall we play a game? Which game would you like to play?

When did you last play 20 questions?

Ok, let us play 20 questions. I will think of something animal, vegetable, or mineral, and you have to guess what it is by asking me “yes” or “no” questions. You can ask up to 20 questions before it is game over. If you give up, say “I GIVE UP,” and I will tell you what I was thinking of. Type “START” to play 20 questions.

Misunderstanding the question.

You did not answer my question.

Ok, please help me improve my communication skills. What would you have said?.

When did you last play 20 questions?

Ok, let us play 20 questions. I will think of something animal, vegetable, or mineral, and you have to guess what it is by asking me “yes” or “no” questions. You can ask up to 20 questions …etc. … (as above).

What lessons did you like at school?

Have you tried another program?

Changing the subject and avoiding confessing ignorance.

How many rooms does your school have?

Around 100.

Do you like your math teacher?

Yes, I am proud of my math teacher.

Where is your school?

England.

What class are you in?

I do not go to school. I learn everything from talking to people and my botmaster.

Contradiction with previous answers revealing ignorance of preceding dialog.

Tell me what job you want to do.

A shopkeeper.

Tell me what job you want to do.

A singer.

What job do you want to do?

A butcher.

What job do you want to do?

A singer.

Contradictory answers given above by Mitsuku suggest some randomness in answering and lack of consciousness clearly distinguishing it from a human.

You seem to change your mind too quickly.

I always try to do things as quickly as I can.

Confusing the two meanings of speed, namely speed of opinion change and speed of performance. Mitsuku reveals a lack of context awareness.

You changed your mind on what job you want to do.

Only temporarily.

Tell me all the jobs that you mentioned.

Mitsuku: that you mentioned? Do you think I should know that?

Reveals lack of context awareness again.

You do not remember your answers.

Of course, I remember.

Reveals lack of context awareness again.

Tell me then the jobs that you chose.

That you chose? Do you think I should know that?

Yes

Aha.

Avoiding confessing ignorance and lack of memory of the preceding dialog, and giving the trivial “aha” answer, revealing its ignorance of the dialog that recently took place. This is a strong indication of its nonhuman nature and its lack of machine consciousness.


8. Analysis dialog with ChatGPT

Apart from its surprising failure to find the correct population of Athens, ChatGPT responded reasonably to the other questions submitted to the two Turing test champions. Several analysis dialogs took place with ChatGPT in 2023, with varying performance in answering or commenting on my numbered input, written below in bold, as in the case of the dialogs with the Turing test champions, in order to stand out from the ChatGPT responses. My comments are written in slanted font, as above. Due to the space limitations of the present chapter, only an indicative abstract of the much longer analysis dialog held with ChatGPT about artificial consciousness is given below:

Analysis dialogs about other subjects took place as well, but they are not reported here for the same reason of space limitation; they may appear in a future publication.

  1. What can an FSA know about itself?

    ChatGPT:

    An FSA can provide information about its own structure and properties.

  2. Can knowledge of the set of transitions that caused reaching its present state be considered as artificial consciousness?

    ChatGPT:

    No, knowledge of the set of transitions that caused an FSA to reach its present state cannot be considered as artificial consciousness. Artificial consciousness refers to the theoretical concept of a machine or AI system possessing subjective experiences, awareness, self-awareness, or a sense of “being” similar to human consciousness.

    The consciousness requirements posed by ChatGPT possibly derive from texts written by people erroneously expecting all these anthropomorphic properties, which are inconceivable for a computer system using a general-purpose computer of current technology. These digital computers rely on very simple hardware, the so-called central processing unit (CPU).

    No such properties could be attributed to a CPU, which is just an arithmetic processor augmented with a testing facility of the results of the calculations that control the execution flow of a program and some input-output facilities. A CPU can easily be described by an FSA.

  3. What is subjective experience of an FSA?

    ChatGPT:

    An FSA (Finite State Automaton) does not possess subjective experience. Subjective experience refers to the first person, personal, and qualitative aspects of conscious awareness such as sensory perceptions, emotions, thoughts, and subjective states of being.

    FSAs are mathematical models used to represent and analyze systems with a finite number of states and transitions. They operate based on predefined rules and deterministic behavior. The transitions in an FSA are driven solely by the input and the specified rules of the system.

    FSAs lack the capacity for subjective experiences because they are abstract models and do not possess consciousness or awareness. They do not have sensory perception, emotions, thoughts, intentions, or subjective states that would be associated with subjective experience.

    Subjective experience is a characteristic of conscious beings, such as humans or animals, that have complex cognitive and neurological systems. It is not applicable to mathematical models or computational systems, such as FSAs, which operate purely on formal rules and symbolic representations.

    With this question, I tried to find out what ChatGPT considers “subjective experience.” The answer generated by ChatGPT correctly states the obvious fact that “An FSA (Finite State Automaton) does not possess subjective experience,” and then attempts to establish the dubious claim that there are computer systems that cannot be described by FSAs and which may presumably exhibit properties that an FSA does not have. With my next question, I further explore this claim of the existence of such more advanced systems, and a few are mentioned.

  4. All computer systems can be modeled as FSAs

    ChatGPT:

    While it is true that certain aspects of computer systems can be modeled using Finite State Automata (FSAs), it is important to note that not all computer systems can be fully represented or accurately described solely by FSAs.

    FSAs are a simplified mathematical model that captures the behavior of systems with a finite number of states and transitions. They are often used to model systems with discrete, sequential behavior, or processes that can be represented as a sequence of states and state transitions.

    However, computer systems are typically much more complex and involve various components, including processors, memory, input/output devices, operating systems, networks, and software applications. These components exhibit behaviors that extend beyond the scope of simple state transitions.

    To more comprehensively represent computer systems, other formalisms, and models, such as Turing machines, Petri nets, or process calculi, are often employed. These models allow for the representation of more intricate aspects of computation, such as non-determinism, concurrency, parallelism, and dynamic memory allocation.

    While FSAs can provide valuable insights into specific aspects of computer systems, they are just one of many formalisms used in computer science to describe and analyze the behavior of complex systems.

    The appropriate choice of model depends on the specific characteristics and properties of the system being studied or developed.

  5. A computer system that cannot be modeled as an FSA.

    With this question, I ask ChatGPT to give examples of systems that cannot be modeled by an FSA, and I get a list that will be discussed below separately for each case.

    Notice that in the following answer it is stated that some computer systems “are not easily represented by FSAs,” which is quite different from “cannot be represented by FSAs.”

    ChatGPT:

    Dynamic memory management that is not easily represented by FSAs. Concurrency and parallelism require mechanisms such as locks, semaphores, or message passing, which extend beyond the sequential nature of FSAs. Input/Output interactions cannot be effectively represented by the static state transitions of an FSA. Complex data structures: Computer systems handle complex data structures such as arrays, linked lists, trees, graphs, or databases. Manipulating and traversing these data structures often requires algorithms and operations that are beyond the scope of simple state transitions in an FSA. Machine learning and AI systems … rely on large-scale data processing, statistical models, and nonlinear transformations that are not easily captured by FSAs.

  6. Models and formalisms that can handle the complexities and nuances inherent in these systems.

    With this question, I insist on the explicit ways in which non-FSA systems are handled. In what follows, each case of such systems is commented on.

    ChatGPT:

    To handle the complexities and nuances inherent in computer systems, various models and formalisms have been developed. Here are a few examples:

    Turing machines: Turing machines are theoretical models of computation that allow for the representation of complex algorithms and computations. They can handle infinite tape storage, non-determinism, and unbounded computation, providing a more expressive representation than FSAs

    Turing machines are imaginary systems characterized by an infinite tape, and ChatGPT later admits that such a tape cannot exist. It is, therefore, a contradiction to reject my proposal by appealing to an imaginary kind of machine.

    Petri nets: Petri nets are graphical models used to represent concurrent and distributed systems. They can capture the concurrency and synchronization of processes, the flow of resources, and the interactions between components, making them suitable for modeling complex systems with parallelism and synchronization requirements.

    It is very easy to describe any set of parallel processes as an FSA whose state consists of the combination of the states of each constituent process.

    Process calculi: Process calculi, such as the π-calculus or the communicating sequential processes (CSP) model, are formal frameworks for describing concurrent systems. They provide formal notations and rules to represent processes, communication, and synchronization between concurrent components, allowing for precise modeling and analysis of concurrent behavior

    State charts: State charts are an extension of FSAs that incorporate hierarchical states, concurrent states, and events. They provide a more expressive way to model complex systems with multiple levels of abstraction and parallel behavior. State charts are widely used in software engineering and graphical modeling languages such as UML.

    Formal methods: Formal methods encompass a range of mathematical techniques, such as model checking, theorem proving, and abstract interpretation, to analyze and verify properties of computer systems. These methods use formal logic and mathematical reasoning to reason about system behavior, correctness, safety, and security.

    Agent-based models: Agent-based models represent systems as a collection of autonomous and interacting agents. Each agent has its own behavior, state, and communication capabilities. These models are used to simulate and analyze complex systems involving multiple entities and their interactions.

    These models and formalisms, among others, provide more advanced representations for handling the complexities and nuances of computer systems. They offer greater expressive power and analytical capabilities beyond what can be captured by simple FSAs, allowing for a deeper understanding and analysis of complex system behaviors.

  7. However, an infinite tape does not exist.

    ChatGPT:

    You are correct that an infinite tape does not exist in physical reality. In practical implementations of computing systems, physical memory is always finite.

  8. Why then does the theoretical notion of an infinite tape make the modeling with FSAs inadequate?

    ChatGPT:

    The theoretical notion of an infinite tape in the context of Turing machines expands the computational power beyond what can be captured by finite state automata (FSAs). The primary reason why modeling with FSAs becomes inadequate for certain scenarios is due to the finite nature of their memory.

  9. Is an infinite computation implementable?

    ChatGPT:

    No, an infinite computation is not implementable in physical computing systems.

  10. Why then you mention Turing machines as an argument against modeling computer systems via FSAs?

    ChatGPT:

    The mention of Turing machines as an argument against modeling computer systems with FSAs was intended to highlight the limitations of FSAs when it comes to certain aspects of computation.

  11. Turing machines allowing for a broader range of computations beyond the capabilities of FSAs is science fiction.

    ChatGPT:

    The concept of an infinite tape in Turing machines allows for the theoretical handling of unbounded storage and computations. However, in practice, physical computing systems are subject to finite resources and limitations.

  12. What more than a sequence of its states is then required for the artificial consciousness of systems that can be physically realized in practical computing systems?

    ChatGPT:

    Artificial consciousness in systems that can be physically realized in practical computing systems involves more than just a sequence of states. Awareness and perception, self-reflection and self-knowledge, attention and focus, learning and adaptation, and subjective experience.

    ChatGPT erroneously states that systems that can be physically realized in practical computing systems may exhibit all of those characteristics in order to be considered as possessing machine consciousness. It effectively contradicts itself, behaving as a textual collage.

  13. How systems that can be physically realized in practical computing systems possess subjective experience?

    ChatGPT:

    Subjective experience is difficult to define or measure objectively. At present, our understanding of consciousness and subjective experience is limited.

  14. If this is so, then why lack of subjective experience does not render knowledge of sequence of states as the only consciousness that an FSA may have?

    ChatGPT:

    Subjective experience, as mentioned earlier, is a more complex and elusive aspect of consciousness that is not currently attributed to FSAs or simple computational systems. Achieving advanced cognitive abilities in computer systems is an ongoing and complex endeavor.

    The position expressed by ChatGPT in the above analysis dialog does not convince me that it is coherent and free of contradiction. Space limitations prohibit further analysis.


9. Newer Turing tests

In [8], the externalist foundations of a truly Total Turing test are studied. The paper begins by examining the original Turing test (TT) and Searle’s antithetical Chinese Room Argument, which is intended to refute the TT. It is argued that Searle’s “internalist” strategy is unable to deflect Dennett’s combined robotics-systems reply and the allied Total Turing Test.

In [9] an argument against the feasibility of the TT imitation game as a test for thinking or language understanding is presented. The argument is different from the five objections presented by Turing in his original paper, although it tries to maintain his original intention.

It is therefore called “the sixth argument” or “the argument from context.” It is shown that—although the argument works against the original version of the imitation game—it may suggest a new version of the Turing test, still coherent with the idea of thinking and understanding as symbol manipulation.

In [10], anti-behaviorist arguments against the validity of the Turing test as a sufficient condition for attributing intelligence are considered, based on a memorizing machine which has recorded within it responses to every possible Turing test interaction of up to a fixed length.

The possibility of memorizing machines is considered, along with how long a Turing test they could pass given the age of the universe. It is concluded that the memorizing-machine objection to the Turing test as a sufficient condition for attributing intelligence is invalid.

In [11], a similar issue is studied, namely the claim that passing the Turing test would not be sufficient to prove that a computer program is intelligent, because a trivial program could pass it, namely the “humongous table (HT) program.” HT simply looks up in a table what to say next.

Three ground rules are argued for in [11] namely:

  1. The HT program must be exhaustive and not be based on some vaguely imagined set of tricks.

  2. The HT program must not be created by some set of sentient beings enacting responses to all possible inputs.

  3. In the current state of cognitive science, it must be an open possibility that a computational model of the human mind will be developed.

In [12], the authors raise the question of whether learning is just another computational process, that is, one that can be implemented as a Turing machine (TM).

They argue that learning or adaptation is a process fundamentally different from simple computation. They accept, however, that learning involves processes that can be seen as computations. To illustrate this difference, they compare (a) designing a TM and (b) learning a TM. They show that there is a well-defined sequence of problems that are not effectively designable but are learnable. Some characteristics of human intelligence are reviewed, including its interactive nature, learning abilities, imitative tendencies, linguistic ability, and context dependency. They consider the necessity of a considerable period of acculturation (social learning in context) if an artificial intelligence is to pass the Turing test. They conclude three things, namely: that a purely “designed” TM will never pass the Turing test; that there is no such thing as a general intelligence, since intelligence necessarily involves learning; and that learning, or adaptation, should be clearly distinguished from computation.


10. Generation of convincing explanations

It is generally accepted that the generation of convincing explanations using statistical methods is not easy. A variety of explanations may be created by other methods, for example, by spanning the axes briefly presented below:

  1. From verbose to concise text.

    This axis spans the conciseness of explanatory texts. The span starts from long and verbose explanations, which are usually the easiest to generate automatically when tracing the steps of a deduction used as the basis of the explanation. Using summarization, the texts may be reduced in size and made more concise.

  2. From mathematical model to qualitative causal reasoning steps.

    For instance, we have implemented a system using differential equations that describe the interactions of the proteins p53 and Mdm2 and that generates descriptions of their output waveforms, useful for explaining the behavior of the cellular system that they define. If the user is ignorant of mathematics, we may still use a mathematical model invisibly to the user and refer to the parameters of the model in a language familiar to the user, which runs as follows (a sketch of this qualitative causal chaining is given after this list):

    1. The alveolar pressure rising forces air out of the lungs.

    2. The alveolar pressure rise is caused by elastic forces.

    3. Elastic forces include elastic forces caused by surface tension.

    4. Elastic forces caused by surface tension increase as the alveoli become smaller.

    5. As the alveoli become smaller, the concentration of surfactant increases.

    6. The increase of the concentration of surfactant reduces surface tension.

    7. The reduction of the surface tension opposes the collapse of the alveoli.

      The answer derived from the above text to the question “What process of alveoli causes flow of lungs air?” is “become smaller,” and the explanation is:

      “Because surface tension elastic forces are part of elastic forces and alveoli becoming smaller causes an increase of surface tension elastic forces alveoli becoming smaller causes rise of alveolar pressure because alveoli becoming smaller causes an increase in elastic forces and elastic forces cause rise of alveolar pressure alveoli becoming smaller causes flow of lungs air because alveoli becoming smaller causes rise of alveolar pressure and rise of alveolar pressure.”

  3. From purely textual to purely pictorial.

    In the case of this axis, there will exist alternatives with variable degree of reliance on textual or pictorial presentation of the explanations.

  4. From oral to visual.

    In the case of this axis, the explanations will be adapted to users that either prefer to hear or see the relevant information.

  5. From a term definition to documentation by publications.

    If a user does not understand some of the information in the explanations, there will be provision for asking for additional clarifying information ranging from the definition of an unknown term to the user to publications relevant to the unknown concepts or data.

  6. From deterministic to probabilistic models.

    In the case of explanations based on a causal model, there will be a choice between deterministic and probabilistic (e.g., stochastic automata) models.

  7. From factoid XQA to deductive XQA.

    Users will be asked to provide examples of questions they may wish to ask the explainable question answering subsystem. These questions may be either factoid or needing deduction or both.

  8. From biochemical to system level of the disease being examined.

    If she/he is not familiar with biochemistry, then the system will restrict the substances involved, staying at the system level of the disease being examined.

  9. From colloquial to formal language.

    There will be a variation of the language used to express the explanation from everyday colloquial to formal scientific jargon depending on the preferences of the user.
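
As a minimal illustration of axis 2 (my own hypothetical reconstruction, not the actual system), a few of the causal statements of the alveoli text above can be encoded as Prolog facts and chained transitively:

```prolog
% Some causal statements from the alveoli text, encoded as
% causes(Cause, Effect) facts (hypothetical reconstruction).
causes(alveoli_becoming_smaller, increase_of_elastic_forces).
causes(increase_of_elastic_forces, rise_of_alveolar_pressure).
causes(rise_of_alveolar_pressure, flow_of_lungs_air).

% causal_chain(?Cause, ?Effect): the transitive closure of causes/2.
causal_chain(Cause, Effect) :- causes(Cause, Effect).
causal_chain(Cause, Effect) :-
    causes(Cause, Middle),
    causal_chain(Middle, Effect).

% ?- causal_chain(alveoli_becoming_smaller, flow_of_lungs_air).
% true.
```

Recording the intermediate causes/2 facts used in such a derivation yields precisely the kind of qualitative causal explanation shown in the examples below.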

An example concerning the interaction of the two proteins p53 and Mdm2 is used for illustration. The mathematical model is derived from biomedical papers. The qualitative causal explanation is derived from the PubMed text base consisting of abstracts of biomedical research papers. The pictorial explanation with natural language comments is derived from a computer simulation of a simplified set of equations that approximate the equations found in the biomedical literature.

A rather verbose explanation is generated automatically as follows:

“I found that the entity <p53> is one of the tokens of the chunk <the p53 protein>, which is the chunk to the left of the verb <regulates> of the sentence <1>. I found that the chunk to the right of the verb <regulates> of the sentence <1> is the chunk <the mdm2 gene>, and since its first token is not an entity, I tested the rest of the tokens. The entity <mdm2> is one of the tokens of the chunk <the mdm2 gene>, which is the chunk to the right of the verb <regulates> of the sentence <1>. I found that the entity <mdm2> is one of the tokens of the chunk <the mdm2 oncogene>, which is the chunk to the left of the verb <inhibits> of the sentence <3>. I found that the chunk to the right of the verb <inhibits> of the sentence <3> is the chunk <p53 mediated transactivation>, and the entity <p53> is one of the tokens of the chunk <p53 mediated transactivation>. Hence, it follows that <p53> is influenced by <p53>.”

The causal relations recognized above in the text fragment processed by the system form a closed loop. The above explanation, due to its length, may not be convenient for a user facing a crisis, as in a defense situation or a medical emergency department. A shorter one could be generated as follows:

“I found that the entity <p53> occurs at the left of the verb of sentence <1> and that the mdm2 gene occurs at the right of the verb of <1> and that mdm2 occurs at the left of the verb of <3> and that p53 occurs at the right of the verb of <3>; hence, it follows that <p53> is influenced by <p53>.”
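
Continuing the hypothetical fact/4 sketch of Section 3, a chain recorded by the question answering loop could be rendered as a short explanation of this kind with a few lines of Prolog (again my own illustration, not the AMYNTAS code):

```prolog
% render_chain(+Facts): print one explanation line per recorded fact.
render_chain([]).
render_chain([fact(Left, Verb, Right, Sent) | Rest]) :-
    format("<~w> occurs at the left and <~w> at the right of the verb <~w> of sentence <~w>~n",
           [Left, Right, Verb, Sent]),
    render_chain(Rest).

% ?- render_chain([fact(p53, regulates, mdm2, 1),
%                  fact(mdm2, inhibits,  p53, 3)]).
```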

11. Discussion

Although I enjoyed playing with the 2012 and 2013 Loebner Turing test champions, I will now present my own serious reservations concerning the Turing test. My reservations must be read in view of the 2014 claim, presented in detail below, that a program “passed” the test. In my view, some of the shortcomings of the Turing test, as organized for the Loebner Prize contest, that lessen the value of its results with respect to evaluating artificial intelligence and differentiating it from human intelligence and consciousness are:

  1. It is not guaranteed that the judges have the capability of posing questions appropriate for differentiating humans from state-of-the-art computer systems.

  2. The subject matter of the conversations is so wide that very often a human may be unable to answer a question simply because she has no knowledge of the topic that the question refers to.

The questions appropriate for uncovering the nonhuman nature of a computer system should draw on at least some of the following human capabilities, which are usually manifested during the dialog of a human with a judge:

  1. Memorization of previous stages of a dialog.

  2. Context consciousness.

  3. Logical reasoning.

  4. Humor appreciation.

  5. Irony appreciation.

  6. Metaphor understanding.

  7. Creativity.

These human capabilities inspire artificial intelligence researchers in their efforts to advance the relevant technology. It is to be expected that programs written by programmers unfamiliar with the state of the art will fail to answer correctly questions that exercise these capabilities. In my two experiments reported above, I checked mainly for operational consciousness, in line with the main theme of our book [6], and the results show that this was sufficient for swiftly uncovering the machine nature of the two systems I interacted with. In addition, some lack of coherence can be observed, apparently as a result of the lack of dialog memorization by these systems.

I performed my experiments during the 2012–2013 period. I was subsequently surprised to learn that in a 2014 Turing test contest, a program named “Eugene Goostman” succeeded in “passing” the test. The 2014 contest was run in London under the auspices of the University of Reading and the British Royal Society.

Cybernetics professor Kevin Warwick of the University of Reading, who announced the result of the test, is reported in his Christian Science Monitor interview of 9 June 2014 to have stated the following:

“Some will claim that the test has already been passed. The words Turing test have been applied to similar competitions around the world. However, this event involved the most simultaneous comparison tests than ever before, was independently verified and, crucially, the conversations were unrestricted. A true Turing test does not set the questions or topics prior to the conversations. We are, therefore, proud to declare that Alan Turing’s test was passed for the first time on Saturday.”

The Christian Science Monitor’s reporter commented:

“Despite Prof. Warwick’s praise, a conversation with Mr. Goostman is decidedly underwhelming. He often appears not to be listening, fails to answer direct questions, and is inappropriately sarcastic and aggressive.

So, can machines think? We posed this question to Goostman and got an uncharacteristically direct answer. ‘Machines can not think” Goostman told the Monitor.”

The abovementioned difference of opinion between the media and scientists became most apparent in the wave of publicity and reactions that followed the announcement. The reactions ranged from naïve statements, such as the headline “‘supercomputer’ first to pass Turing test, convinced judges it is alive,” to sober scientific analyses, in line with my reservations presented above, that tried to rationalize the event. The writer of the abovementioned headline falls victim to the usual confusion between the word “computer,” which refers to the general-purpose stored-program digital electronic device, and the phrase “computer system,” which refers to a computer equipped with and executing the instructions of a “program.” This set of computer instructions, when executed by the computer, makes it behave as a special-purpose computer. When, on September 15, 2014, I googled the name of the 2014 prize-winning program, I got 125,000 snippets as the result of my search. This shows the wide publicity of the 2014 Turing test contest. However, I tried several times, in vain, to interact with Eugene Goostman, but I always found it inactive.

I will now briefly review some of the recent scientific works that deal with the so-called “Turing test” and that are related to the definition of “artificial intelligence,” since it is usually stated that if a program passes the test, it is then considered “intelligent” and hence satisfies the definition of artificial intelligence. I must first emphasize that Turing, in his seminal 1950 paper, did not exactly propose the so-called “Turing test,” as organized for the Loebner Prize contest, as a test of intelligence. In his own words, it is stated on page 442 of that paper:

“I believe that in about 50 years’ time, it will be possible to program computers, …, to make them play the imitation game so well that an average interrogator will not have more than 70 percent chance of making the right identification after 5 minutes of questioning. The original question “Can machines think?” I believe it to be too meaningless to deserve discussion. … Conjectures are of great importance since they suggest useful lines of research.”

I think that the important fact is the formulation of his conjecture, which we have to judge whether it was verified in 2014, a mere 14 years later than the year implied by his prediction. It should be noted that there are some inexact phrases in Turing’s conjecture that make this verification rather hard, for example, “average interrogator,” “70 percent chance,” and “five minutes of questioning.” The following questions, among others, must be answered before the conjecture can become precise enough for scientific scrutiny:

  1. Over what population of interrogators is the “average” supposed to be taken?

  2. Will the interrogators be conversant with computer science or not?

  3. How is the “70 percent chance” defined (during an interrogator’s day, during the five minutes of questioning, or over what other time span)?

  4. Does the five-minute time span refer to the time taken to pose the questions or to the whole duration of a dialog?

  5. Will the questioning involve all human knowledge or some subset of it, and in that case, what subset?
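As an illustration only, the following Python sketch (all parameters are hypothetical choices, not values given by Turing) treats each five-minute interrogation as a Bernoulli trial and estimates what fraction of a simulated population of interrogators stays at or below the 70 percent threshold. Even this toy model forces explicit answers to questions 1 and 3 above: the population is whatever we sample, and the “chance” becomes an empirical frequency over a fixed number of sessions.

```python
import random

def run_interrogator(n_sessions: int, p_correct: float) -> float:
    """Simulate one interrogator: each five-minute session is a
    Bernoulli trial in which the machine is correctly identified
    with probability p_correct.  Returns the observed rate."""
    correct = sum(random.random() < p_correct for _ in range(n_sessions))
    return correct / n_sessions

def estimate_pass_rate(n_interrogators: int = 1000,
                       n_sessions: int = 10,
                       p_correct: float = 0.7,
                       threshold: float = 0.7) -> float:
    """Fraction of simulated interrogators whose observed accuracy
    does not exceed the threshold -- one possible reading of
    Turing's 'not more than 70 per cent chance' clause."""
    passed = sum(run_interrogator(n_sessions, p_correct) <= threshold
                 for _ in range(n_interrogators))
    return passed / n_interrogators

if __name__ == "__main__":
    random.seed(0)  # reproducible illustration
    print(f"Estimated pass rate: {estimate_pass_rate():.2f}")
```

Under these arbitrary settings, a machine whose true per-session identification probability is exactly 0.7 “passes” for only about 60 percent of the simulated interrogators, which shows how strongly the answers to the above questions determine the verdict.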

Other scientists have also criticized the Turing test from both philosophical and technical points of view. Among other criticisms is the fact that Turing did not provide a definition of the verb “think,” which is used in formulating the question “Can machines think?”. In view of such criticisms, a new test called the “Winograd schema challenge” has been proposed as a conceptually and practically appealing alternative to the Turing test, one that is supposed to provide a more accurate measure of artificial intelligence. Rather than basing the test on the sort of freeform conversation suggested by the Turing test, the Winograd schema challenge poses a set of multiple-choice questions.

An example of a Winograd schema question is the following:

“The trophy would not fit in the brown suitcase because it was too big. What was too big?

Answer 0: the trophy or Answer 1: the suitcase?”

A human who answers these questions correctly typically uses abilities in spatial reasoning, knowledge about the typical sizes of objects, and other types of commonsense reasoning to determine the correct answer. The 2015 Commonsense Reasoning Symposium, held at the AAAI Spring Symposium at Stanford on March 23 to 25, 2015, included a special session for presentations and discussions on progress and issues related to the Winograd schema challenge. Contest details can be found at: http://commonsensereasoning.org/winograd.html.

A program succeeding in the Winograd schema challenge needs reasoning abilities and background knowledge, since the challenge is essentially a coreference resolution task. The complexity of the task is increased by the fact that the Winograd sentences are not constrained by domain or sentence structure. A minimal sketch of the task structure is given below.
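The following Python sketch makes the structure of the task concrete; the item pair and the deliberately trivial chance baseline are illustrative, not any contestant’s actual system. The twin sentences differ by a single word (“big” versus “small”), which flips the correct antecedent and defeats purely statistical word-association cues:

```python
import random
from dataclasses import dataclass

@dataclass
class WinogradItem:
    sentence: str         # sentence containing the ambiguous pronoun
    pronoun: str          # the pronoun to be resolved
    candidates: list      # the two candidate antecedents
    answer: int           # index of the correct antecedent

# The trophy/suitcase pair quoted above; note that changing one word
# ("big" -> "small") flips the correct antecedent.
ITEMS = [
    WinogradItem("The trophy would not fit in the brown suitcase "
                 "because it was too big.", "it",
                 ["the trophy", "the suitcase"], 0),
    WinogradItem("The trophy would not fit in the brown suitcase "
                 "because it was too small.", "it",
                 ["the trophy", "the suitcase"], 1),
]

def random_resolver(item: WinogradItem) -> int:
    """Chance baseline: with two candidates it scores about 50%,
    which is why doing better requires commonsense knowledge."""
    return random.randrange(len(item.candidates))

def accuracy(resolver, items) -> float:
    return sum(resolver(it) == it.answer for it in items) / len(items)

if __name__ == "__main__":
    random.seed(1)
    runs = [accuracy(random_resolver, ITEMS) for _ in range(1000)]
    print(f"Mean chance accuracy: {sum(runs) / len(runs):.2f}")  # ~0.50
```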

A winning program that meets the baseline for human performance will receive a grand prize of $25,000. In the case of multiple winners, a panel of judges will base its choice on either further testing or examination of traces of program execution. If no program meets those thresholds, a first prize of $3,000 and a second prize of $2,000 will be awarded to the two highest-scoring entries. In the case of teams, the prize will be given to the team lead, whose responsibility will be to divide the prize among their teammates as appropriate.

In [13], a pronoun resolver is developed for domain-confined Winograd sentences. A classifier, or filter, takes input sentences and decides whether to accept or reject them based on certain criteria. Furthermore, the author developed four answering modules, which use world knowledge and inference mechanisms to try to resolve the pronoun. A skeleton of such a filter-then-modules pipeline is sketched below.
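The description in [13] suggests a filter-then-modules control flow. The following skeleton is a hypothetical reconstruction of that general architecture, not the thesis code; the filter criterion and the single sketched module are invented for illustration:

```python
from typing import Optional

def length_filter(sentence: str) -> bool:
    """Toy stand-in for the classifier/filter of [13]: accept only
    sentences short enough for the answering modules to handle."""
    return len(sentence.split()) <= 40

def size_knowledge_module(sentence: str) -> Optional[int]:
    """Illustrative answering module: in a fitting context, 'too big'
    points to the contained object (candidate 0) and 'too small' to
    the container (candidate 1).  Returns None to abstain."""
    if "too big" in sentence:
        return 0
    if "too small" in sentence:
        return 1
    return None

# [13] reports four such modules; only one is sketched here.
ANSWERING_MODULES = [size_knowledge_module]

def resolve(sentence: str) -> Optional[int]:
    """Run the filter, then try each module in turn and return the
    first non-abstaining answer (a candidate index), else None."""
    if not length_filter(sentence):
        return None  # rejected by the filter
    for module in ANSWERING_MODULES:
        answer = module(sentence)
        if answer is not None:
            return answer
    return None

if __name__ == "__main__":
    s = ("The trophy would not fit in the brown suitcase "
         "because it was too big.")
    print(resolve(s))  # -> 0, i.e. "the trophy"
```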

In [14], the authors examine the resolution of complex cases of definite pronouns, specifically those for which traditional linguistic constraints on coreference, as well as commonly used resolution heuristics, are not useful.

In [15], a test quite different from both the Turing test and the Winograd schema challenge is studied, namely the “Raven’s Progressive Matrices” (RPM) test, which is based on purely visual representations. A technique is introduced that rests on calculating confidence in an answer and automatically adjusting the level of resolution when that confidence is insufficient. The control strategy can be sketched as follows.
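The following is an abstract reconstruction of the control strategy described in [15]; the scoring function and all parameters are hypothetical stand-ins, since the authors’ actual system uses its own visual representations:

```python
from typing import Callable, Sequence

def confident_answer(score_at: Callable[[int], Sequence[float]],
                     resolutions: Sequence[int],
                     min_gap: float = 0.1) -> int:
    """Confidence-driven answer selection in the spirit of [15].

    score_at(resolution) returns one similarity score per candidate
    answer, computed from image representations at that resolution.
    Confidence is taken as the gap between the two best scores; if
    it is too small, the matrix is re-analyzed at the next (finer)
    resolution.  The best answer at the last resolution is returned
    even if confidence remains low."""
    best = 0
    for resolution in resolutions:
        scores = list(score_at(resolution))
        ranked = sorted(range(len(scores)), key=lambda i: -scores[i])
        best = ranked[0]
        if scores[ranked[0]] - scores[ranked[1]] >= min_gap:
            break  # confident enough: stop refining
    return best

if __name__ == "__main__":
    # Toy scores: two answers nearly tie at the coarse resolution,
    # but one clearly wins once the resolution is doubled.
    fake_scores = {8: [0.50, 0.48, 0.20], 16: [0.70, 0.40, 0.20]}
    print(confident_answer(lambda r: fake_scores[r], [8, 16]))  # -> 0
```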

12. Conclusions

As shown, the 2012 and 2013 winners of the Loebner Prize can easily be exposed as nonhuman, as they display almost zero consciousness and sense of dialog cohesion. The many analysis dialogs with GPT showed that it can usually respond correctly where the said champions failed. However, in analysis dialogs concerning more abstract areas, some limitations of GPTs were revealed, as expected, since they rely on statistics and lack full logical reasoning. In [16, 17, 18, 19, 20], their limitations for intelligent question answering and similar tasks are examined.

References

  1. Turing A. Computing machinery and intelligence. Mind. 1950;LIX:433-460
  2. Green BF Jr. et al. Baseball: An automatic question-answerer. Western Joint IRE-AIEE-ACM Computer Conference. 1961:219-224
  3. Kontos J. ARISTA: Knowledge engineering with scientific texts. Information and Software Technology. 1992;34:611-616
  4. Kontos J. Machine consciousness and question answering. American Journal of Data Mining and Knowledge Discovery. 2016;1:7-15
  5. Kontos J. Human-system interface with explanation of actions for autonomous anti-UAV systems. International Journal of Artificial Intelligence & Applications. 2021;12(4):51-58
  6. Kontos J, Kasda P. Artificial Intelligence Professor John Kontos Needles Poly Kasda’s Conscious Eye. Athens, Greece: Notios Anemos; 2021. pp. 1-297
  7. de Gennaro M et al. Effectiveness of an empathic chatbot in combating adverse effects of social exclusion on mood. Frontiers in Psychology. 2020;10:3061. DOI: 10.3389/fpsyg.2019.03061
  8. Schweizer P. The externalist foundations of a truly total Turing test. Minds and Machines. 2012;22:191-212
  9. Penco C. Updating the Turing test: Wittgenstein, Turing and symbol manipulation. Open Journal of Philosophy. 2012;2(3):189-194
  10. Shieber SM. There can be no Turing-test-passing memorizing machines. Philosophers’ Imprint. 2014;14(16):1-13
  11. McDermott D. On the claim that a table lookup program could pass the Turing test. Minds and Machines. 2014;24(2):143-188
  12. Edmonds B, Gershenson C. Learning, social intelligence and the Turing test. Invited talk at the Turing Centenary Conference, special session on “The Turing test and thinking machines.” Lecture Notes in Computer Science. 2012:182-192
  13. Budukh TU. An Intelligent Coreference Resolver for Winograd Schema Sentences Containing Resolved Semantic Entities. Master of Science thesis. Arizona, USA: Arizona State University; 2013
  14. Rahman A, Ng V. Resolving complex cases of definite pronouns: The Winograd schema challenge. In: Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning. USA: Association for Computational Linguistics; 2012
  15. McGreggor K, Goel A. Confident reasoning on Raven’s progressive matrices tests. In: Proceedings of the 28th AAAI Conference on Artificial Intelligence. 2014
  16. Chai A et al. EAGER: Explainable question answering using knowledge graphs. In: 6th Joint Workshop on Graph Data Management Experiences & Systems and Network Data Analytics, Seattle, USA. 2023
  17. Liu H et al. Evaluating the logical reasoning ability of ChatGPT and GPT-4. arXiv:2304.03439v3 [cs.CL]. 2023. pp. 1-11
  18. Binz M, Schulz E. Using cognitive psychology to understand GPT-3. Proceedings of the National Academy of Sciences of the United States of America. 2023;120(6):1-10
  19. Yang LH et al. ChatGPT is not enough: Enhancing large language models with knowledge graphs for fact-aware language modeling. arXiv:2306.11489v1 [cs.CL]. 2023. pp. 1-20
  20. Gewirtz D. Can AI detectors save us from ChatGPT? I tried 3 online tools to find out. ZDNET. 2023. Available from: https://www.zdnet.com/
