Computation within the human brain cannot be fully emulated by artificial intelligence machines. The human brain has a remarkable mechanism for performing computation whose end result is new knowledge. In this chapter, we present a new approach for emulating the computation that occurs within the human brain to obtain new knowledge, knowledge that keeps growing and renewing itself as time passes. Based on this phenomenon, we have built an intelligent system called the Knowledge-Growing System (KGS). This approach is the basis for designing an agent that has the ability to think and act rationally like a human, called a cognitive agent. Our cognitive modeling approach has resulted in a model of human information processing and a technique called Arwin-Adang-Aciek-Sembiring (A3S). This brain-inspired method opens a new perspective in AI known as cognitive artificial intelligence (CAI). CAI computation can be applied to various applications, namely: (1) knowledge extraction in an integrated information system, (2) probabilistic cognitive robots and coordination among autonomous agent systems, (3) human health detection, and (4) electrical instrument measurement. CAI provides a wide opportunity to yield various technologies and intelligent instrumentation, as well as to encourage the development of cognitive science, which in turn encourages the intelligent-systems approach to human intelligence.
- cognitive agent
- cognitive artificial intelligence
- knowledge extraction
- Knowledge‐Growing System
- intelligent system
The term artificial intelligence (AI) was coined in the mid-1950s, but the endeavor to emulate human intelligence, in this case how humans think, has been around since the 19th century. Why AI? Humans have intelligence that is not possessed by other living things. With this intelligence, humans can do almost everything, such as walking, talking, creating something worthy, doing business, leading an organization, inventing new technologies, and writing papers. How can humans do these things? There has to be something awesome inside that drives humans to perform such actions. There can be no action without knowledge, and it is this something awesome that produces knowledge. It was then found that this something awesome is the brain, with its complex structure as well as its mechanism. Without a brain, humans are just a pile of useless skin and bones, like a computer without a processor. So, the brain is the source of all human actions. The brain is the only "engine" that processes all information perceived by humans, and the processed information becomes knowledge that creates human intelligence, namely, the ability to do almost everything.
Humans appear to do almost anything by easily orchestrating all of their apparatuses to perform the actions mentioned previously. What we can do is emulate how the brain works by processing information to obtain knowledge. The brain works when humans think, and this process is automatic when humans observe phenomena around them, meaning that there is a computation mechanism within the brain that will benefit humankind if it can be emulated in computer-based systems. Many approaches, techniques, and methods have been invented by researchers all over the world. Some approaches are based on how the brain's neural networks work, on the use of stored knowledge or past learning, on how humans think or act rationally, and on the human information processing system. It is understood that computation within the human brain is very complex and cannot be emulated exactly. Therefore, emulating even some of its magnificence is something that can benefit humankind.
Based on the information in the previous paragraph, the first approach is called computational AI, the second is called knowledge-based AI or machine learning, the third is called the agent approach, while the fourth is called the Human Inference System (HIS) approach. Since the introduction of agent terminology and its definition by [3, 5], any kind of approach in AI is related to the agent because the agent is assumed to represent humans in real life. To become intelligent, an agent has to have knowledge, and if we look at humans, an agent can obtain knowledge by any number of methods. With humans, knowledge does not come suddenly without a process. Therefore, how can we get knowledge of something? First, we have to have information regarding that thing. This information is then sent to the brain to be processed into knowledge of that thing. Based on the knowledge at this time, we can make an estimated recognition of that thing. The knowledge of that thing can be added to, or grown, if we get more information from other sources. The more information we get, the more knowledge we have to estimate the recognition of a thing more accurately. Estimating is making a decision about something that might happen in the future, and estimation can only be done through thinking.
In this scheme of a real-life mechanism, we can see that the process of acquiring knowledge not only shows how neural networks work (as an example), namely, processing the information and storing the knowledge in the brain's memory for later use, but also shows how the information, as the first source of knowledge, is gathered and processed within the brain with the ultimate aim of having comprehensive knowledge. This knowledge starts from nothing, grows with the increasing amount and kinds of information as well as the kinds of information source, and is stored and extracted when decisions need to be made. While other researchers focus their research on how the brain's cells work or how to create an intelligent agent with certain characteristics, we focus ours on how humans obtain their knowledge through a mechanism known as information extraction.
Our research was initiated by our curiosity: there has to be a unique mechanism within the brain that gives humans knowledge. It is an abstract thing, but it is real. Therefore, we hypothesized that the brain does something that we call growing the knowledge. The knowledge grows with the accretion of information as time passes. This is the essence of the system called the Knowledge-Growing System (KGS), a kind of cognitive agent. This system has been around since its first introduction in 2009 . The remainder of this chapter is organized as follows. Section 1 gives a brief introduction to what KGS is and a glance at AI. The approach perspectives for the development of KGS are presented in Section 2. In Section 3 we show the steps needed to build a KGS. Examples of the application of KGS to real-world problems are given in Section 4. Section 5 provides some concluding remarks.
2. The idea and approach perspectives in building KGS
2.1. The idea
The idea was simple. It emerged from a discussion in our laboratory in the mid-2000s. We had been observing the phenomenon of why humans are intelligent. Humans have a unique intelligence that differs from one person to another, and they are intelligent because they have knowledge. The questions are: how does the knowledge get there, where does it come from, when does it increase, and what mechanism processes it? We did not inquire why knowledge exists, because humans have a brain that differs from that of other living creatures. Our research focuses only on the four questions above, especially on the mechanism of the growth of knowledge within the human brain. These are simple questions, but at that time we could not find any literature that had studied this matter. Therefore, we decided to conduct this research, and we have shared some of our results in previous publications.
2.2. Psychological perspective
Why did we need to view our designed system from a psychological perspective? The primary consideration was that thinking is a cognitive process and is most naturally viewed from the psychological side. We studied a number of models that are considered models of human thought. There are many such models, but we selected the ones that represent the mechanism that occurs when humans think and act rationally, and that are sufficiently representative. They are Galileo's four-step advancement of rational thinking, Piaget's schema, Feynman's model, which perfects Galileo's, Popper's three-world model, the cognitive psychology model, and Boyd's OODA model. The last of these can be used as part of our new human information processing model. A review of these selected models can be read in . Figure 1 summarizes all approaches from a psychological perspective to building new models of how humans think as the basis for KGS.
2.3. Mathematical perspective
In many cases, humans think probabilistically, especially when faced with situations that require a decision. In most situations, the decision to be made has to consider a plethora of data; on the other hand, there are also many decision alternatives or options that can be selected, e.g., plan A, plan B, etc. At the time we did the research, there was no method capable of coping with many data/indications together with many alternatives or hypotheses. We defined three types of probabilistic thinking for mathematical verification, as follows :
2.3.1. Many‐to‐Estimated One (MEO) probability.
From processed data/indications, information will be obtained with a necessary certainty called the degree of certainty (DoC), which leads to an inference regarding the hypothesis being observed.
2.3.2. One‐to‐Many‐to‐Estimated‐One (OMEO) probability.
Given a single processed data item/indication, information will be obtained regarding the DoCs of all available hypotheses, which in turn leads to a single hypothesis with the largest DoC.
2.3.3. Many‐to‐Many‐to‐Estimated‐One (MMEO) probability.
Given processed multiple indications, information will be obtained regarding the DoCs of all available hypotheses, which in turn leads to a single hypothesis with the largest DoC. This is also called a multihypothesis multidata/indication problem, as illustrated in Figure 1. This is the case faced by most humans in real life.
The famous method designed to handle multiple data but with only one hypothesis is the Bayes Inference Method (BIM). Later, we found that BIM is categorized as an MEO probability, as hypothesized above. Figure 2 illustrates how MMEO probability works to obtain information with the related DoCs.
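As a concrete illustration of BIM as an MEO scheme, the sketch below applies recursive Bayesian updating of a single hypothesis from several data items. The likelihood values are invented for illustration and are not taken from the chapter.

```python
# A minimal sketch of recursive Bayesian updating (BIM as MEO):
# many data items are fused into an estimate of one hypothesis.
# The sensor model (the likelihood values) is an illustrative assumption.

def bayes_update(prior, likelihood_h, likelihood_not_h):
    """One Bayes step: P(H|e) = P(e|H) P(H) / P(e)."""
    evidence = likelihood_h * prior + likelihood_not_h * (1.0 - prior)
    return likelihood_h * prior / evidence

# Start with an uninformative prior on hypothesis H.
p_h = 0.5
# Three observations, each mildly supporting H: P(e|H)=0.8, P(e|not H)=0.3.
for _ in range(3):
    p_h = bayes_update(p_h, 0.8, 0.3)

print(round(p_h, 3))  # belief in H grows with each observation
```

Each pass feeds the posterior back in as the next prior, which is how a single-hypothesis DoC accumulates over many data items.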
2.4. Electrical engineering and informatics perspective
Even though AI is approached from various science and technology disciplines, it is mainly studied, researched, developed, applied, and implemented by researchers from the electrical engineering and informatics fields. AI can be viewed from various angles. From this perspective, we had our own view, namely, the anatomy view. Based on our study, we found three theories that view AI from its anatomy: first, the theory of agent from Russell & Norvig ; second, the theory of topology from Ahmad ; and third, the types of processed data and the types of growing knowledge from Munakata , from which we can determine what type of data is grown by the system. Figure 3 shows the three approaches that we use to develop our own KGS.
Another field that also attracts interest is information fusion, a method for integrating or combining information from diverse or multiple sources into single comprehensive information to be used to estimate or predict an entity being observed. The information fusion field emerged from the desire to obtain a better estimation of objects under observation from many sensors rather than just one.
Interestingly, humans already have multiple sensors to help them comprehend the environment. Normally, humans have five sensory organs, namely, eyes, ears, skin, tongue, and nose. Each sensory organ delivers its own kind of information. We believe that humans can acquire knowledge because of information delivered by the sensory organs and processed by the brain, which is then stored in a certain location within it. "Processed" in this case means that the information from the various sensory organs about something in the environment being observed is fused to become knowledge. Fusing the information can be called knowledge extraction, meaning extracting knowledge from the fused information.
2.5. Human information processing perspective
Essentially, the information processing model is a theory of human development that uses the computer as a metaphor for explaining thought processes. Similar to computers, humans transform information to solve cognitive problems. Development is viewed in terms of changes in memory‐storage capacities and use of different types of cognitive strategies. On the other hand, information processing can be defined as the acquisition, recording, organization, retrieval, display, and dissemination of information.
We take a look at only three information processing models, namely, Wickens' model, Welford's model, and Whiting's model, the combination of which, in our assessment, can be adapted to our HIS design. These three models are evaluated in . Figure 4 shows Whiting's model of information processing, which on thorough investigation is very close to Russell & Norvig's agent concept.
2.6. Social science perspective
In an endeavor to find a method that can complement BIM, so that from the two methods we could build our own, we found a decision method from social science called the Linear Opinion Pool (LOP) . This method is designed to handle single data with multiple hypotheses. The method is important because an agent is a social entity that cannot work alone in achieving its goal. In general, there are three types of social decision making, as follows.
2.6.1. Linear Opinion Pool
LOP combines the individual probability assessments P_i(H_j) of N sources into a weighted sum, P(H_j) = w_1 P_1(H_j) + … + w_N P_N(H_j), where each weight w_i lies between 0 and 1 and the weights sum to 1.
2.6.2. Independent Opinion Pool
In the Independent Opinion Pool (IOP), the individual opinions are treated as independent and are multiplied and then normalized: P(H_j) ∝ P_1(H_j) × … × P_N(H_j).
2.6.3. Independent Likelihood Pool
The Independent Likelihood Pool (ILP) combines the individual likelihoods under a common prior: P(H_j | E_1, …, E_N) ∝ P(H_j) P(E_1 | H_j) … P(E_N | H_j).
The ILP method is the proper means for information combination in sensory cases because the a priori information tends to come from the same source. However, the most widely used method in consensus theory is the LOP method, and in AI, LOP is the general method for combining probabilities from diverse agents to produce one social probability . Later, we also found that LOP is categorized as an OMEO probability, as hypothesized in Section 2.3.
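To make the contrast concrete, here is a minimal sketch of how LOP (weighted sum of opinions) and ILP (product of likelihoods under a prior) combine two agents' opinions over three hypotheses. The opinion vectors, weights, and uniform prior are illustrative assumptions, not values from the chapter.

```python
# Hedged sketch: Linear Opinion Pool vs Independent Likelihood Pool
# for two agents and three hypotheses. All numbers are illustrative.

def lop(opinions, weights):
    """Linear Opinion Pool: weighted sum of probability vectors."""
    m = len(opinions[0])
    return [sum(w * p[j] for w, p in zip(weights, opinions)) for j in range(m)]

def ilp(likelihoods, prior):
    """Independent Likelihood Pool: prior times product of likelihoods, normalized."""
    post = list(prior)
    for lik in likelihoods:
        post = [post[j] * lik[j] for j in range(len(post))]
    z = sum(post)
    return [p / z for p in post]

agent_a = [0.6, 0.3, 0.1]
agent_b = [0.2, 0.5, 0.3]

print(lop([agent_a, agent_b], [0.5, 0.5]))        # weighted sum per hypothesis
print(ilp([agent_a, agent_b], [1/3, 1/3, 1/3]))   # normalized product of beliefs
```

Note how the two pools can disagree: the equal-weight LOP ties the first two hypotheses, while the ILP, by multiplying, lets agreement between agents dominate and picks the second hypothesis.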
3. Building the KGS
There is no exact definition of knowledge. For centuries there have been endeavors to understand what knowledge is. Essentially, knowledge is about knowing something, and knowing needs a means as well as a mechanism, such as thinking, perceiving, experiencing, learning, or interacting. An agent can employ one of these means, or more than one simultaneously, such as thinking while observing something. This is possible because humans have the ability to multitask. According to , there are four ways of knowing, namely:
Some or all knowledge is innate.
Some or all knowledge is observational.
Some or all knowledge is nonobservational, attained by thought alone.
Some or all knowledge is partly observational and partly not—attained at once by observing and thinking.
Back to our idea of building a KGS: a cognitive agent whose primary intelligent characteristic is the capability of growing knowledge, designed to think and act rationally like humans. Our design steps are as follows:
Determine the type of obtaining the knowledge.
Design the cycle for obtaining the knowledge and the model of information processing.
Design the cognitive agent.
Design the mechanism of obtaining the knowledge.
3.1. Determine the type of obtaining the knowledge
In the theory of knowledge generation, researchers in the field of psychology had different perspectives on knowledge generation in the human brain, but their ideas are similar to what was later called constructivism. In its simplest definition, constructivism is a theory of learning or theory of knowledge (epistemology) which states that humans generate knowledge and meaning from experiences and interactions. Fundamentally, constructivists believe that humans "construct" their own knowledge and understanding through the ideas, content, events, etc. that they come in contact with. This review can be read in .
From the agent perspective, knowledge generation can be viewed in terms of concept and method, and in terms of terminology. From the first view, there are three types of methods, namely: (1) based on experience or past data , (2) based on interaction with the environment or cognitive computation , and (3) based on self-organization . From the second view, at the time we did the research, the only work that used this term was done for an industrial application. It examined how knowledge grows old in the human brain and used the analogy to build a reconfiguration system for an automobile application. It concentrated on optimizing knowledge retrieval rather than emulating the way the human brain grows knowledge over time .
There is an intersection of approach between past and current researchers that knowledge generation can be carried out by means of interaction with the environment. Therefore, we adopt the concept of constructivism in terms of interaction as the basis for growing knowledge in the human brain. This is the foundation of our new terminology, namely, knowledge growing, as a mechanism to grow knowledge in KGS.
3.2. Design the cycle for obtaining the knowledge and the model of information processing
Based on our study of various human information processing models, we conclude that the knowledge-growing cycle and the human information processing model cannot be separated from each other. The cycle represents the mechanism of obtaining knowledge and using it as the basis for making decisions or taking actions, while the human information processing model shows how knowledge is grown within the brain.
From the models that have been studied, we have introduced a new human thought cycle called Sense‐Inference and Decision Formulation‐Decide and Act (SIDA) , which is part of the new human information processing model called HIS , as depicted in Figure 5 and Figure 6, respectively. Figure 6 became the basis for developing a mathematical model for knowledge growing.
3.3. Design the cognitive agent
In designing the agent, the essential point is that it has to have cognitive capability, namely, knowledge growing by means of information fusion. Our cognitive agent is depicted in Figure 7, an updated version of the one in .
3.4. Design the mechanism of obtaining the knowledge
This is the most important section of the chapter. The hardest part was deriving the mathematical model of knowledge growing, because without it, it would not be possible to examine the mechanism depicted in Figure 6. First, we developed the formula for knowledge growing by taking advantage of the BIM and LOP methods, obtaining a mathematical formula designed to handle multiple data and multiple hypotheses called the Maximum Score of the Total Sum of Joint Probabilities (MSJP) , as presented in Eq. (4). It is the representation of MMEO probability as hypothesized in Section 2.3. This method was then refined to become the A3S method , the method for KGS knowledge growing, as given in Eq. (5). To make the long notation easy to remember and write, we also introduce a simplified shorthand for it:
where the resulting distribution is called the New Knowledge Probability Distribution (NKPD).
Humans normally have five sensory organs that dynamically sense any kind of phenomenon that occurs in their environment. This phenomenon can be in physical or nonphysical form. The sensed phenomenon is then perceived by a sensory organ such as the ears for subsequent information processing. To gain comprehensive information, the information delivered by the individual sensory organs is then fused by the brain. In general, the amount of fused information can be obtained by using Eq. (6):
A = 2^N − (N + 1)
where A is the amount of fused information and N is the number of sensors. In the case of humans, with N = 5, there will be 26 combinations of fused information or clusters (Figure 6).
The number of combinations is obtained under the assumption that no information fusion occurs for information delivered by a single sensory organ (or a single paired organ such as the eyes or ears). In general, the possible combinations of information from multiple sources that may be fused in KGS are shown in Table 1. Each combination has its own information inferencing (Figure 7), and the fused information distribution is listed in the last row. Table 2 presents the mathematical model of Table 1. As we can see in Table 1, the inputs to KGS are very simple, and they have to be binary, after a certain conversion mechanism from their original inputs.
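The count given by Eq. (6) can be checked by direct enumeration. The sketch below assumes, as stated above, that fusion requires at least two sources, and lists every fusable sensor cluster:

```python
# Sketch: enumerating the fusable sensor combinations behind Eq. (6).
# Fusion needs at least two sources, so the empty set and singletons
# are excluded, giving 2**N - (N + 1) combinations.
from itertools import combinations

sensors = ["eyes", "ears", "skin", "tongue", "nose"]

clusters = [c for r in range(2, len(sensors) + 1)
            for c in combinations(sensors, r)]

print(len(clusters))                            # 26 for N = 5
print(2 ** len(sensors) - (len(sensors) + 1))   # 26, the same count from Eq. (6)
```

The enumeration and the closed form agree: C(5,2) + C(5,3) + C(5,4) + C(5,5) = 10 + 10 + 5 + 1 = 26.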
|Information source||The possible combination of information from information sources|
If we assume in general that N is the number of information sources (multisensors), then there is a collection of hypotheses, or a multihypothesis, about an environmental phenomenon regarding the information supplied by the multisensor. At the end of the computation, N also functions as the number of pieces of fused information from the multisensor that explain a collection of individual phenomena based on the multihypothesis.
The notation P(H_j|E_i) represents the probability that hypothesis j is true given the information sensed and perceived by sensor i. The DoC defines that hypothesis j is selected based on the fusion of the information delivered from the multisensor, that is, from sensor 1 to sensor N. The subscript "1" in the notation means that the computation results are the DoC at time 1, the first observation time. This index is required if we want to compute the next observation. Information fusion to obtain a collection of DoCs is given in Eq. (7):
where, for simplicity, shorthand symbols replace the longer probability notations, and the resulting distribution is the NKPD. This is a collection of information that can be further extracted to obtain new knowledge. The new knowledge at this point can be obtained by applying Eq. (12):
where the hypothesis with the largest value is the inference, which later becomes the new knowledge of KGS. The growing of knowledge over time is obtained by replacing the first column of Table 2 with a time parameter. Advancing the A3S method to involve the time parameter gives rise to a new method called Observation Multi-time A3S (OMA3S), and the knowledge distribution resulting from the application of this method is called the New Knowledge Probability Distribution over Time (NKPDT) :
The certainty of the phenomena that KGS observes in the environment is measured by using the DoC formula given in Eq. (11) for a single observation time using A3S and in Eq. (12) for multiple observation times using OMA3S (see the details in ).
where the DoC is the knowledge, in terms of the probability value of the best hypothesis j, at a single observation time and at multiple observation times, respectively.
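Since the equations above did not survive extraction, the following is a minimal sketch of one common reading of A3S-style fusion, offered only as an assumption-laden illustration and not as the chapter's exact Eqs. (4)-(12): per-sensor degrees of belief for each hypothesis are summed and normalized into an NKPD, per-time NKPDs are fused the same way over time (OMA3S-style), and the DoC is the probability of the best hypothesis.

```python
# Hedged sketch of A3S-style fusion (an assumption, not the chapter's exact
# equations): belief vectors are summed element-wise and normalized into a
# New Knowledge Probability Distribution (NKPD); fusing per-time NKPDs the
# same way gives an OMA3S-style distribution over time (NKPDT).

def fuse(distributions):
    """Sum belief vectors element-wise and normalize to a probability vector."""
    m = len(distributions[0])
    totals = [sum(d[j] for d in distributions) for j in range(m)]
    z = sum(totals)
    return [t / z for t in totals]

# Two sensors' beliefs over three hypotheses at observation time 1.
nkpd_t1 = fuse([[0.7, 0.2, 0.1],
                [0.5, 0.4, 0.1]])

# A second observation time, then fusion over time (OMA3S-style).
nkpd_t2 = fuse([[0.6, 0.3, 0.1],
                [0.8, 0.1, 0.1]])
nkpdt = fuse([nkpd_t1, nkpd_t2])

doc = max(nkpdt)         # degree of certainty of the best hypothesis
best = nkpdt.index(doc)  # index of the inferred hypothesis
print(best, round(doc, 3))
```

The sensor beliefs here are invented; the point is only the shape of the computation: fuse across sensors at each time, then fuse across times, then read off the largest probability as the DoC.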
3.5. Application examples
3.5.1. CAI for intelligence decision making
In this section we share our work on the application of CAI for intelligence decision making. “Intelligence” means “The product resulting from the collection, processing, integration, evaluation, analysis, and interpretation of available information concerning foreign nations, hostile or potentially hostile forces or elements, or areas of actual or potential operations” . The process of obtaining this intelligence is carried out intelligently by KGS, a method of CAI.
Our work was supported by a national body that is responsible for security at sea. This body needs an automatic mechanism to extract knowledge from intelligence data delivered from direct observation by field personnel. With such a mechanism, it is hoped that the abundance of data from the field can be processed quickly while maintaining accuracy. The extracted knowledge obtained from the processed data will be used as the basis for decisions and actions for law enforcement at sea. Based on these requirements, we customized our method to suit the problems at hand. First, we have to build an intelligence tree based on an intelligence estimation table as well as on indications of incidents related to security and safety at sea. Second, an algorithm to represent the work of the required mechanism also has to be built. This algorithm becomes the basis for coding the mechanism, which is the core of the whole system. In general, the intelligence tree for security and safety at sea is depicted in Figure 8.
3.5.1.1. Determining the hypotheses and indications
The two important points that have to be determined are the number of hypotheses and the number of indications. Determining the hypotheses is much easier because in general there are only eight possible incidents regarding security and safety at sea. For security at sea, there are seven hypotheses, namely, piracy, illegal mining, illegal fuel, illegal fishing, trafficking, drugs, and smuggling. There is only one hypothesis for safety at sea, namely, safety from crime at sea. Each hypothesis has its own indications, but some of them overlap one another. Therefore, we have to compare all indications. Initially, we had 54 indications altogether. After comparing them and confirming them with the user, the number of indications narrowed to 42. These indications are then placed in a table along with all hypotheses. One of the tables containing the hypotheses and indications is shown in Figure 9. In this figure we show all hypotheses, coded H1 to H8, while for indications we show only 12 of them, coded ID1 to ID12. This table represents information within the intelligence tree for security and safety at sea as depicted in Figure 8.
3.5.1.2. Operating the intelligence decision-making system
In testing the system, we created test inputs. The inputs to the system are binary numbers. There are two reasons why we selected binary numbers (0 and 1, or "yes" and "no"). The first is that a decision cannot be blurred or cause ambiguity for the decision maker. The second is that a binary decision is a type of critical decision making . This scheme is relevant to the situation faced by the user's field personnel because the information regarding the observed phenomenon has to be confirmed. If it is blurred or ambiguous, then the knowledge obtained by the system may not represent the real situation, meaning that the possibility of making an incorrect decision may be higher. In this test, we use all hypotheses and only some indications. This is to verify that the system runs well and that the knowledge obtained is relevant to human thought and decisions.
For this test, we use 10 indications with eight hypotheses. To operate the system, we first run the application software with KGS already embedded in it. The operation is depicted chronologically in the following figures. Figure 10 shows the empty field that is to be filled with inputs. Figure 11 shows the field already filled with a combination of inputs. Field filling is based on the phenomenon observed by the personnel. Each observation is compared with the available indications as well as the available hypotheses. A certain observation of the phenomenon may result in one or more indications, which may be possessed by one or more incidents represented by the hypotheses. The observed indications are then entered in the appropriate rows; this scheme makes some rows filled with "1." The next step is to execute the system by clicking the "Submit" button; the results of the computation are depicted in Figure 12 and can be saved as depicted in Figure 13.
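The binary input scheme described above can be sketched as follows, using a small hypothetical hypothesis-indication table (its 0/1 entries are invented for illustration and are not the chapter's Figure 9): confirmed indications are matched against the table and the per-hypothesis scores are normalized so they sum to 1.

```python
# Hedged sketch of the binary input scheme: a hypothetical 0/1
# hypothesis-indication table and one observation's indication vector.
# Table values are illustrative, not taken from the chapter.

# Rows: indications ID1..ID4; columns: three hypotheses.
table = [
    [1, 0, 0],   # ID1 points to hypothesis 0
    [0, 1, 1],   # ID2 points to hypotheses 1 and 2
    [0, 1, 0],   # ID3 points to hypothesis 1
    [1, 0, 1],   # ID4 points to hypotheses 0 and 2
]

observed = [0, 1, 1, 0]   # field personnel confirmed ID2 and ID3 only

# Score each hypothesis by the confirmed indications that point to it,
# then normalize the scores into a probability-like distribution.
scores = [sum(table[i][j] * observed[i] for i in range(len(table)))
          for j in range(len(table[0]))]
z = sum(scores)
nkpd = [s / z for s in scores]

print(nkpd)   # hypothesis 1 gets the largest share
```

Forcing the inputs to 0/1 keeps each cell unambiguous, matching the "confirmed or not" reporting discipline described for field personnel.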
The results of the computation can be easily extracted and then converted to a graphic to make them easy to understand, as depicted in Figure 14. From the results, we can extract the knowledge regarding the observed phenomenon. The computation results represent NKPD of the system and its DoC can be obtained by applying Eq. (12) as follows:
The DoC value starts at 0 because at the beginning the system knows nothing about the phenomenon. After one observation, the system has knowledge regarding the phenomenon, with a DoC of 0.20 out of 1, where this value is in the "Trafficking" column. This means that the system has only little certainty regarding the "Trafficking" situation. A single observation may be used as the basis for making a decision, but humans in general carry out more observations to ensure that the observed phenomenon is the same as the phenomenon observed at time 1 (the first observation). Based on that consideration, we put two more observation-time inputs into the system; the results of the computation are depicted graphically in Figures 15 and 16. After three observation times, we have a table with the computation results of each observation time (Table 3).
|Category||Piracy||Illegal mining||Illegal fuel||Illegal fishing||Trafficking||Drugs||Smuggling||Safe from crime at sea|
As we can see in Table 3, over three observation times the system consistently concludes that the phenomenon it observes is "Trafficking"; in each observation time, the system's DoC points to that hypothesis. To ensure that this is the correct phenomenon, the system has to obtain the ultimate knowledge from all observation times. The resultant knowledge is represented by the NKPDT, where the hypothesis with the highest DoC becomes the ultimate knowledge of the system that determines the estimated phenomenon being observed. Applying Eqs. (9) and (10) yields the NKPDT and DoC as follows. The DoC is depicted in Figure 17.
For this example, the system obtains knowledge that the phenomenon being observed is not just "Trafficking" but also "Drugs." In one view this is a good result, because trafficking has a noticeable relation to drugs. According to research done by , there is a growing convergence of the drug and human trades, resulting from drug organizations entering the businesses of smuggling and trafficking. Based on this knowledge, the decision maker can make a decision and take action to cope with such a situation. The decision maker can set up a proper strategy, determine the number of personnel who will carry out the mission, and prepare a budget to support it. If the decision maker needs more detail about the situation, more observation-time data can be added to the system. The more observation times, the more precise the knowledge obtained by the system.
3.5.2. CAI for inferring the electrocardiography (ECG) data of heart block and arrhythmia
This section is taken from the work of  where KGS is utilized to obtain knowledge regarding heart block and arrhythmia.
3.5.2.1. Theory of ECG
ECG is a medical instrument used to read the condition of the human heart. ECG works by receiving data from the heart's electrical signals and displaying them in graphical form known as a PQRST graph. Electrodes are placed at several points on the surface of the body to receive the heart's electrical signals; usually, 12 leads of electrodes are used. The condition of the heart can be diagnosed by observing the shape, amplitudes, and wave periods of a PQRST wave. A normal ECG graph is shown in Figure 18. The vertical axis shows the amplitude of the PQRST (mV), and the horizontal axis defines the time parameter (s). The ECG graph is recorded on moving paper, which traces the heart activity detected by the electrodes. Figure 19 shows an example of a normal ECG graph recording. A physician reads the graph produced by the ECG machine by comparing the wave patterns from each of the 12 leads. Every abnormal pattern may signal one or more abnormalities or unhealthy conditions of the heart. Because in this section observation is limited to detecting heart block and arrhythmia, information on the hypotheses of heart conditions and the responses of the ECG graph when observing those conditions is gathered in Table 4.
| No. (T) | Hypotheses (H) | Indication (ID) |
|---|---|---|
| 1. | Normal | R wave in lead I + & R wave in lead aVF + |
| 2. | Normal | R: lead V6 + |
| 3. | Normal | R: lead V1 – |
| 5. | Left axis deviation (LAD) | R: lead I + & R: lead aVF – |
| 6. | Right axis deviation (RAD) | R: lead I – & R: lead aVF + |
| 7. | Left bundle branch block (LBBB) or right bundle branch block (RBBB) | Lead V1, interval QRS > 20 ms |
| 8. | RBBB | Lead V6, S amplitude < 0.1 mV & S interval > 80 ms |
| 14. | RBBB | In lead V3, merging together of the S wave and T wave |
| 15. | LBBB | V5, Q amplitude < 0.1 mV |
| 26. | LBBB | ST elevation in lead V4 |
| 27. | Arrhythmia | In one lead, varying RR interval |
| 32. | Atrial tachycardia | Abnormal P wave morphology |
| 37. | Atrial tachycardia | Isoelectric baseline |
| 38. | Ventricular tachycardia | P and QRS complexes at different rates |
| 49. | Ventricular tachycardia | RSR complexes with taller R |
3.5.2.2. Inferring the ECG data of heart block and arrhythmia
Calculation of the QRS angle can be used to detect a block in the bundle branch . Arrhythmia, meanwhile, is an abnormal condition of the heart-beat rhythm. These two conditions can be used to analyze information regarding possible heart disease. To implement the KGS computation for diagnosing heart conditions, information must first be collected on all hypotheses and indications related to ECG observations of heart block and arrhythmia. In this section, eight heart conditions related to these two categories are used as hypothesis information. All hypotheses and indications for inferring the ECG data can be seen in Figures 20 and 21.
The system performs the computation by looking at the correlations between indications and the appropriate hypotheses; further details can be seen in Figure 22. Logic "1" denotes a correlation between a hypothesis and an indication. As shown in Figure 22, the relationship between a hypothesis H and an indication ID is represented by logic 1. This means that at the time of examination the ECG results show the graph described for indication number 1 in Table 4, which relates to a normal heart condition. For a better understanding of the computation process in the system, an example using five indications as input is given; in this example, the observation was done five times. The results of the computation can be seen in Table 5.
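The hypothesis-indication correlations just described can be held in a simple binary matrix. The sketch below is illustrative only: the hypothesis names and indication labels are hypothetical placeholders, not the actual contents of Figure 22.

```python
# Binary correlation matrix between hypotheses (rows) and indications
# (columns): a 1 marks that an indication is consistent with a hypothesis.
# Labels and matrix entries are placeholders for illustration.
hypotheses = ["Normal", "LAD", "RAD", "LBBB", "RBBB"]
indications = ["ID1", "ID2", "ID3", "ID4", "ID5"]

correlation = [
    [1, 1, 0, 0, 0],  # Normal
    [0, 0, 1, 0, 0],  # LAD
    [0, 0, 0, 1, 0],  # RAD
    [0, 0, 0, 0, 1],  # LBBB
    [0, 0, 0, 1, 1],  # RBBB
]

def supported_hypotheses(observed):
    """Return hypotheses correlated with at least one observed indication."""
    cols = [indications.index(i) for i in observed]
    return [h for h, row in zip(hypotheses, correlation)
            if any(row[c] for c in cols)]

print(supported_hypotheses(["ID1", "ID4"]))  # → ['Normal', 'RAD', 'RBBB']
```

Looking up which rows contain a logic 1 in the observed columns is exactly the matching step the system performs before any DoC values are computed.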
According to Table 5, the computation of the system can be explained as follows. After collecting the information on the correlations between indications and hypotheses, the value of P(ω) is computed, yielding a DoC value for every hypothesis at one observation time. Then, after the DoCs for each hypothesis have been collected, the system computes the DoC over the whole observation period using the OMA3S equation. For this example, five observation times are applied; the system gives the results shown in Figure 23. By examining each correlation between hypothesis and indication at each observation time and applying the OMA3S formula, the results can be summarized as depicted in Figure 24, which shows the results from all observation times according to the data sample from Table 4. From the graph, it can be seen that the observed heart condition is 74% normal, with a 26% tendency toward the left axis deviation (LAD) condition. Further examples of observation results can be seen in Figures 25 and 26 for different heart conditions: Figure 25 shows a left bundle branch block (LBBB) condition that tends toward LAD, and Figure 26 shows an arrhythmia condition that tends toward atrial tachycardia.
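The two-stage flow above — a per-observation DoC for each hypothesis, then fusion across observation times — can be sketched as follows. The actual A3S and OMA3S equations are defined earlier in the chapter; in this sketch the per-observation DoC is taken as the fraction of observed indications that match a hypothesis, and the cross-time fusion is a simple arithmetic mean, both of which are assumed stand-ins rather than the real formulas.

```python
# Sketch of the KGS computational flow described above. The per-observation
# DoC (matching indications / indications observed) and the cross-time
# fusion (arithmetic mean) are illustrative stand-ins for A3S / OMA3S.

def doc_single_observation(correlation, observed_cols):
    """Degree of Certainty for each hypothesis at one observation time."""
    n = len(observed_cols)
    return [sum(row[c] for c in observed_cols) / n for row in correlation]

def fuse_over_time(doc_series):
    """Combine per-observation DoCs across all observation times
    (stand-in for the OMA3S formula: a simple mean per hypothesis)."""
    t = len(doc_series)
    return [sum(docs[i] for docs in doc_series) / t
            for i in range(len(doc_series[0]))]

# Toy correlation matrix: two hypotheses, three indications (placeholders).
correlation = [
    [1, 1, 0],  # hypothesis H1
    [0, 1, 1],  # hypothesis H2
]

# Five observation times, each reporting which indication columns fired.
observations = [[0], [0, 1], [1], [1, 2], [0, 1]]
per_time = [doc_single_observation(correlation, obs) for obs in observations]
print(fuse_over_time(per_time))  # → [0.9, 0.6]
```

As in the chapter's example, adding more observation times refines the fused DoC values, which is why knowledge is said to grow with observation time.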
3.5.3. Other CAI applications for humankind
KGS, as the main engine of CAI, has been applied to several real‐life problems. Its applications range from decision making to biomedical engineering, such as military decision making in [21, 22] and , power plant energy management in , gene behavior estimation in , dissolved gas analysis for interpreting transformer condition in [34, 35], a device encryption method in , and intelligence analysis and estimation in . We have been making further advances by transferring the KGS algorithm to hardware to develop a cognitive processor [38, 39, 40]. We see many opportunities for this kind of processor for humankind, such as intelligent unmanned vehicles.
4. Concluding remarks
Applying AI to a simple idea has not been an easy task until now. Our simple idea of emulating how the human brain grows knowledge, or thinks, over time, which is what makes humans intelligent, has been successfully realized in the AI field. These novelties comprise the cognitive agent KGS, which can think and act rationally like a human; the A3S and OMA3S formulas for growing knowledge; and the new HIS model with the SIDA cycle as part of it. KGS is one of the CAI methods we have developed since 2006, and its computation method matures as time passes and as the number of fields in which it is applied grows. Our research also confirms that humans become intelligent by interacting with the environment, as stated in constructivism theory. Also important, our research shows that a comprehensive model of human intelligence must be approached from diverse disciplines.
In this chapter we have also presented two examples of CAI applications for humankind: one related to obtaining knowledge for intelligence decision making, and the other to obtaining knowledge for an e‐health application. Our examples show that KGS is able to grow its knowledge from nothing to a certain extent, depending on the number of observation times. The more information is processed, the more knowledge is obtained, and the more intelligent the system becomes. Equally important, the outcome of this knowledge is more precise decisions and actions that can be taken by the decision maker. Most important of all, our research has opened a new perspective in AI. We are proud to call this new perspective Cognitive Artificial Intelligence, abbreviated as CAI.