Emotional System with Consciousness and Behavior Using Dopamine

Written By

Eiji Hayashi

Published: 01 December 2009

DOI: 10.5772/6839

From the Edited Volume

Advances in Human-Robot Interaction

Edited by Vladimir A. Kulyukin


1. Introduction

Recently, the development of robots other than industrial robots, including home robots, personal robots, medical robots, and amusement robots, has been brisk. These robots, however, require improvements in their intellectual capabilities and manual skills, as well as greater user compatibility (Y. Takahashi, M. Asada (2003)). User compatibility in future robots is important for ease of use, non-fatiguing control, robot friendliness (i.e., sympathetic use), and human-like capricious behavior. So far, however, the development of such robots has met with problems in their interactions with humans, especially in relation to motion strategies, communication, etc.

The author has developed a superior automatic piano with which a user can reproduce a desired performance, as shown in Figure 1 (E. Hayashi, M. Yamane, T. Ishikawa, K. Yamamoto and H. Mori (1993); E. Hayashi, M. Yamane and H. Mori (1994)). The piano's hardware and software have been created, and the piano's action mechanism has been analyzed (E. Hayashi, M. Yamane and H. Mori (2000); E. Hayashi (2006)). The automatic piano employs feedback control to follow an input waveform for a touch actuator, which uses an eddy-current position sensor to strike a key. This fundamental input waveform is used to accurately and automatically reproduce a key touch based on performance information for a piece of classical music. The automatic piano was exhibited at EXPO 2005 AICHI JAPAN, where a demonstration of its abilities was given.

Figure 1.

Automatic Piano: FMT-I.

For music to be reproduced by this automatic piano, the user must edit 1,000 or more notes in the score of even a short piece of music (K. Asami, E. Hayashi, M. Yamane, H. Mori, and T. Kitamura (Aug. 1998) (Sept. 1998); Y. Hikisaka, E. Hayashi (2007)). However, because the automatic piano can reproduce music accurately, the user can create an emotionally expressive performance according to an idea without the finger and arm actions required of a pianist.

Although a user can certainly create a desired expression with the automatic piano, the user notices variations in the performance when listening to it repeatedly, and must continue to make changes in the expressive performance. In other words, humans seem to like change. These findings suggest that a robot will also need slight variations in its behavior to make interactions with it more pleasing.

McCarthy has indicated that a robot will need to consider and introspect in order to operate in the common-sense world and to accomplish tasks given to it by humans; as such, it will need to have consciousness, introspective knowledge (J. McCarthy (1996)), and some philosophy (J. McCarthy (1995)). He indicates, however, that robots should not be equipped with human-like emotions.

In my laboratory, an animal's adjustment to its environment has been studied in an attempt to emulate its behavior (N. Goto, E. Hayashi (2008); T. Kitamura, D. Nishino (2006)), and attempts have been made to give robots "consciousness" and "emotion" such as those identified in humans and animals in order to enhance the affinity between humans and robots. These efforts may allow us to meet some of the requirements (E. Hayashi (2007); E. Hayashi (2008); S. K. Talwar, S. Xu, E. S. Hawley, S. A. Weiss, K. A. Moxon, and J. K. Chapin (2002); J. Y. Donnart and J. A. Meyer (1996); R. A. Brooks (1991)) for user compatibility.

Consciousness and behavior are related, and a hierarchical structure model that we call Consciousness-Based Architecture (CBA) has been constructed with 5 layers. CBA was synthesized based on a mechanistic expression model of animal consciousness and behavior advocated by the Vietnamese philosopher Tran Duc Thao (Tran Duc Thao, D. J. Herman, D. V. Morano (1986)). CBA introduces an evaluation function for behavior selection and controls the robot's behavior. Although the consciousness level changes in the model, it is difficult for a robot to behave autonomously using CBA alone. To achieve such autonomous behavior, it is necessary to continuously produce motion or behavior in the robot and to autonomously change the consciousness level.

Humans tend to lose interest if a robot continuously gives the same answer or repeats the same motion, but it is not easy for a robot developer to create varied responses and behavioral strategies in a robot. However, if a robot could behave consciously and autonomously, i.e., if a robot had emotional expression, its behaviors would appear natural, and the user would not lose interest in it. Even the human brain, however, does not have a function by which emotions are controlled and managed, nor does a unified system for synthetically administering emotion exist (Joseph LeDoux (1996)).

In humans and animals, the control or management of emotions depends on the existence of some motivation. The strategy for controlling or managing emotions is carried out as the motivation increases or decreases. Thus, in the present study, a motivation model was developed to induce conscious, autonomous changes in behavior and was combined with CBA. CBA was restructured from 5 layers to 4 layers, and the motivation model was added. Basically, the motivation model serves as an input to the CBA and comprises an algorithm with various inputs based on a trace of naturally occurring dopamine, one of the monoamine neurotransmitters (H. Kimura (2005)).

In this chapter, the expression of emotion by a Conscious Behavior Robot (Conbe-I) incorporating this motivation model, and the autonomous actions performed to take an object from a human's hand, were studied. The Conbe-I, whose arm has six degrees of freedom, was developed with the aim of providing the robot with the ability to autonomously adjust to a target position. It is a robotic arm with a hand consisting of three fingers, in which a small monocular CCD camera is installed. A landmark object is detected in the image acquired by the CCD camera, enabling the robot to perform holding and carrying tasks. As an autonomous behavior experiment, CBA including the motivation model was applied to the Conbe-I, and its behavior was then studied.

2. System structure of the Conbe-I

2.1. Hardware

The actuator of the Conbe-I, shown in Figure 2, is basically a robotic arm that was made with Kihara Iron Works. The Conbe-I is 450 mm long and is divided into two parts: an arm and a hand. The arm and the hand have 6 degrees of freedom and 1 degree of freedom, respectively, so the Conbe-I has a total of 7 degrees of freedom, as shown in Figure 3. The hand has 3 fingers, and a CCD camera is fixed on the hand.

Figure 2.

Appearance of the Conscious Behavior Robot (Conbe-I).

Figure 3.

Arrangement of degrees of freedom.

Each joint and the hand are driven by a Dynamixel DX-117 actuator manufactured by ROBOTIS Co., Ltd. The DX-117 has a reduction gear and an angle sensor, and can control position and velocity using a target angle, a torque limit, a speed limit, and so on.

Figure 4.

System of the Conbe-I.

The Conbe-I system, shown in Figure 4, consists of a motion control system and CBA with the motivation system running on a personal computer, together with the actuators. Communication between the personal computer and the actuators uses RS-485. A USB control driver was also developed so that the computer could control the actuators while processing the motion control and CBA with the motivation system.

2.2. Motion control system

It is difficult to calculate the angles of all joints from a target position using inverse kinematics, since the Conbe-I has 6 degrees of freedom in the arm and 1 degree of freedom in the hand. Therefore, the actuators were divided into 4 groups: a shoulder, an elbow, a wrist, and a finger, as shown in Figure 5. Each group was then given a role to play in calculating an angle.

Figure 5.

Actuator divided into 4 groups.

Basically, the calculation algorithm starts from θ6 at the wrist to follow a target object. When the calculated angle exceeds the movable range, the angles from the elbow to the shoulder are calculated in turn. Additionally, a total of 81 different patterns are calculated based on the wrist position, and the algorithm is structured so that the posture of the Conbe-I can be chosen such that the distance between the target object and the fingertip becomes the shortest or the longest among the patterns.

As a result, it is possible to move the Conbe-I immediately toward the target object without inverse kinematics when the Conbe-I finds the target object.
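As a rough illustration of this pattern search, the following Python sketch enumerates three candidate offsets for each of the four joint groups (3^4 = 81 patterns) and keeps the admissible posture whose fingertip is closest to, or farthest from, the target. The offsets, joint limits, and forward-kinematics stub are assumptions for illustration only, not the actual Conbe-I geometry or code.

```python
import itertools
import numpy as np

# Illustrative sketch: enumerate candidate postures for the four joint groups
# (shoulder, elbow, wrist, finger) and pick the admissible one whose fingertip
# is closest to (or farthest from) the target.  Three candidate offsets per
# group give 3**4 = 81 patterns, matching the count described in the text.

CANDIDATE_OFFSETS = (-0.1, 0.0, 0.1)   # rad, assumed step per joint group

def forward_kinematics(group_angles):
    """Placeholder: map the 4 group angles to a fingertip position (x, y, z).
    A real implementation would chain the Conbe-I link transforms."""
    return np.array([np.cos(group_angles[0]), np.sin(group_angles[1]),
                     0.1 * sum(group_angles)])        # dummy geometry

def within_limits(group_angles):
    """Placeholder movable-range check (assumed symmetric limits)."""
    return all(abs(a) <= 2.6 for a in group_angles)

def choose_posture(current, target, prefer_shortest=True):
    """Return the candidate posture minimising (or maximising) fingertip distance."""
    best, best_dist = None, None
    for offsets in itertools.product(CANDIDATE_OFFSETS, repeat=4):
        candidate = [a + d for a, d in zip(current, offsets)]
        if not within_limits(candidate):
            continue
        dist = float(np.linalg.norm(forward_kinematics(candidate) - np.asarray(target)))
        if best_dist is None or (dist < best_dist) == prefer_shortest:
            best, best_dist = candidate, dist
    return best

# Example: nudge the arm toward a target without solving inverse kinematics.
posture = choose_posture(current=[0.0, 0.5, -0.3, 0.0], target=[0.3, 0.2, 0.25])
```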

2.3. Consciousness-based architecture

Figure 6 provides a diagram of a hierarchical structure model called CBA (consciousness-based architecture) that hierarchically relates consciousness to behavior. In this model, the consciousness field and the behavior field are built separately.

In a dynamic environment, the model decides on the consciousness level that the Conbe-I must consider most strongly, and the Conbe-I selects the behavior corresponding to that consciousness level in the consciousness field and performs it through the behavior modules.

This model is characterized by the consciousness level being able to rise to a higher level. Therefore, the Conbe-I can select a more advanced behavior when the behavior corresponding to the current consciousness level is discouraged by the external environment. Additionally, the mechanism of this model freely selects the most comfortable behavior.

Figure 6.

Model of Consciousness-Based Architecture.

Figure 7.

Autonomous behavior system using 4 layers.

To combine the motivation model described in Section 3 with the CBA, the consciousness fields and the behavior modules of Figure 6 were restructured from 5 layers to 4 layers, with the consciousness going up to Level 3. The relation between consciousness and behavior is shown in Figure 7.

We believe that level 4 requires learning, memory, and a mind based on experience, whereas the behavior up to level 3 is instinctive in nature.
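The following sketch illustrates one way such a 4-layer coupling between consciousness level and behavior modules might be organized; the module names, the escalation rule, and the class structure are assumptions for illustration, not the published CBA implementation.

```python
# Conceptual sketch (assumed names): four consciousness levels tied to behavior
# modules, with the level raised by one when the currently selected behavior
# is discouraged by the external environment.

BEHAVIOR_MODULES = {
    0: "rest",       # level 0: no target in view, stay still   (assumed module)
    1: "search",     # level 1: look around for a colored object (assumed module)
    2: "approach",   # level 2: move the fingertip toward the object (assumed)
    3: "grasp",      # level 3: close the fingers on the object  (assumed module)
}

class ConsciousnessField:
    def __init__(self):
        self.level = 0

    def update(self, level_from_motivation, behavior_discouraged):
        """Take the level suggested by the motivation model and escalate by one
        level (up to level 3) if the current behavior was blocked externally."""
        self.level = min(3, max(0, level_from_motivation))
        if behavior_discouraged and self.level < 3:
            self.level += 1
        return BEHAVIOR_MODULES[self.level]
```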

3. Motivation model

Even if a robot is pleasing to people due to its unique movements, people will still lose interest in its behavior after their initial delight unless the robot introduces some variation. Although consciousness and behavior appear not to change significantly when a situation is encountered repeatedly, in actuality they are not the same at all; they continue to change with time, and the consciousness level continues to switch with time.

The CBA is useful for determining the relationship between consciousness and behavior. However, it is not able to continuously change the consciousness and behavior; as a result, the behavior becomes too mechanical.

Figure 8.

Motivation in an animal taking action.

Figure 9.

Simple flow for practicing a task.

Behaviors similar to those of human beings and animals are needed to actualize user compatibility. First, animal behavior such as that shown in Figure 8 was considered. When an animal, including a human being, takes some action, the action can be represented by a flow such as "Recognition -> Comprehension -> Action." The behavior will certainly be based on some motivation. Therefore, the emotion-driven behavior of a robot can be directed by a simple flow such as that shown in Figure 9, with the motivation as its driver; a schematic sketch of this flow is given below. We considered this simple flow to be one of the causes of the mechanical action of the robot. The concept of motive was therefore incorporated into the robot, and a motivation model was constructed to structure an action choice with continuous variation resembling that of human beings and animals.
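The sketch below spells out the "Recognition -> Comprehension -> Action" flow; all function names and the motivation threshold are illustrative placeholders, not the Conbe-I implementation.

```python
# Schematic sketch of the flow in Figure 9, with the motivation value deciding
# whether an action is taken at all.  Everything here is a placeholder.

def recognition(image):
    """Detect colored objects in the camera image (placeholder)."""
    return [{"color": "green", "size": 120, "distance": 0.4}]

def comprehension(objects, motivation):
    """Pick a target only if motivation exceeds an assumed threshold."""
    return objects[0] if objects and motivation > 0.3 else None

def action(target):
    """Issue an approach command toward the target (placeholder)."""
    print("approach", target)

def step(image, motivation):
    target = comprehension(recognition(image), motivation)
    if target is not None:
        action(target)
```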

3.1. Naturally occurring dopamine as an input of motivation

Dopamine, one of the monoamine neurotransmitters, was considered in structuring the motivation model. Dopamine is thought to perform several functions in the brain and to play important roles in behavior, cognition, and motivation. When animals, including human beings, take various actions, dopamine is secreted in the brain. For that reason, a more natural choice of action would be enabled if the motivation provided by dopamine could be reproduced in a robot.

The motivation model was developed based on an analysis of the effect of an atypical antipsychotic medicine, risperidone, on the release of dopamine in the central nervous system (H. Kimura (2005)). A graph depicting changes in the quantity of dopamine when a stimulant (accelerator) was administered to a rat showed that dopamine levels in the brain suddenly increased when the accelerator was taken, and then slowly decreased shortly afterwards. Figure 10 shows a trace of the dopamine's variation. The change in the quantity of dopamine depends on the quantity of the accelerator.

The waveform of naturally occurring dopamine in Figure 10 was divided into "rise" and "fall" portions. The "rise" and "fall" waveforms are modeled by two types of linear differential equation, as shown in the figure: a second-order system is adopted for the "rise" part and a first-order system for the "fall" part.

Figure 10.

Trace of naturally occurring dopamine in the case of a rat.

When the input x(t) is an accelerator of dopamine and the output y(t) is the naturally occurring dopamine, the rise waveform is given by

$$\ddot{y}(t) + 2\zeta\omega_n\,\dot{y}(t) + \omega_n^2\,y(t) = \omega_n^2\,x(t) \tag{1}$$

where ζ is the damping factor and ωn is the natural frequency.

For the input x(t) and the output y(t), the fall waveform is given by

$$T\,\dot{y}(t) + y(t) = x(t) \tag{2}$$

where T is the time constant.

The waveform of the naturally occurring dopamine is simulated by calculating the time responses of the respective equations for a step input x(t), as shown in Figure 11.

First, the step response of the second-order system is used for the start of the naturally occurring dopamine. After the peak value is reached, the step response of the first-order system is used. Various traces of naturally occurring dopamine can be expressed by appropriately setting the natural frequency ωn, the damping factor ζ, and the time constant T, which are the variables included in the motivation model.

Figure 11.

Calculating the flow of naturally occurring dopamine.

Equations (1) and (2) are solved using the Runge-Kutta method to continuously calculate the variation of the input waveform.
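The sketch below shows one way Eqs. (1) and (2) could be integrated numerically with the classical fourth-order Runge-Kutta method, switching from the second-order "rise" response to the first-order "fall" response once the peak is passed. The parameter values (ωn, ζ, T, and the step size) are assumed for illustration and are not the values used in the actual system.

```python
import numpy as np

# Minimal numerical sketch of the rise/fall dopamine trace: Eq. (1) (second-order
# rise) is integrated until the response peaks, then Eq. (2) (first-order fall)
# toward zero input takes over.  All parameter values are assumed.

OMEGA_N, ZETA, T_CONST, DT = 2.0, 0.7, 3.0, 0.01

def rk4(f, y, t, dt):
    """One classical Runge-Kutta step for dy/dt = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + dt / 2, y + dt / 2 * k1)
    k3 = f(t + dt / 2, y + dt / 2 * k2)
    k4 = f(t + dt, y + dt * k3)
    return y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def rise_rhs(x):
    # Eq. (1) written as a first-order system: state = [y, y']
    def f(t, s):
        y, yd = s
        return np.array([yd, OMEGA_N**2 * (x - y) - 2 * ZETA * OMEGA_N * yd])
    return f

def fall_rhs(x):
    # Eq. (2): T y' + y = x
    def f(t, y):
        return (x - y) / T_CONST
    return f

def dopamine_trace(step_input=1.0, t_end=10.0):
    """Rise with the second-order step response, then switch to the first-order
    decay (toward zero input) once the peak is passed."""
    trace, state, t, falling, y_scalar = [], np.array([0.0, 0.0]), 0.0, False, 0.0
    while t < t_end:
        if not falling:
            state = rk4(rise_rhs(step_input), state, t, DT)
            if state[1] < 0.0:          # derivative turns negative: peak reached
                falling, y_scalar = True, state[0]
            trace.append(float(state[0]))
        else:
            y_scalar = rk4(fall_rhs(0.0), y_scalar, t, DT)
            trace.append(float(y_scalar))
        t += DT
    return trace
```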

3.2. Accelerator of dopamine

To change the consciousness and behavior with time, it is necessary to enable their variation. The quantity of naturally occurring dopamine also varies to some extent according to the injected quantity of the accelerator. Therefore, the injected quantity of the accelerator was determined by the size of a group of pixels in an image obtained from the CCD camera in the hand of the Conbe-I, as shown in Figure 12.

Figure 12.

Process of partitioning

To make multiple segments of grouped pixels, the image obtained from the CCD camera was simplified into four color groups: green, blue, flesh, and others; the green, blue, and flesh colors were recognized as objects, and the corresponding image regions could be labeled. To distinguish and recognize objects, the shape, size, and center of the blue and flesh-colored segments were calculated, as shown in Figure 13. Additionally, from this information and the posture of the robot arm, the Conbe-I could roughly recognize the position and distance of the colored object.

Figure 13.

Distinction and recognition.
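A minimal sketch of this color grouping and labeling step is shown below, using OpenCV connected-component labeling; the HSV color ranges and the minimum segment area are assumptions chosen for illustration, not the thresholds used in the actual system.

```python
import cv2
import numpy as np

# Illustrative sketch: classify pixels into green, blue, and flesh-colored
# groups, label connected segments, and report each segment's size and center.
# The HSV ranges below are assumed values, not taken from the chapter.

COLOR_RANGES_HSV = {
    "green": ((35, 80, 60), (85, 255, 255)),
    "blue":  ((95, 80, 60), (130, 255, 255)),
    "flesh": ((0, 30, 80), (25, 180, 255)),
}

def segment_objects(bgr_image, min_area=200):
    """Return a list of detected segments with color, pixel area, and centroid."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    detections = []
    for color, (lo, hi) in COLOR_RANGES_HSV.items():
        mask = cv2.inRange(hsv, np.array(lo), np.array(hi))
        n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
        for i in range(1, n):                       # label 0 is the background
            area = stats[i, cv2.CC_STAT_AREA]
            if area >= min_area:
                detections.append({"color": color,
                                   "area": int(area),
                                   "center": tuple(centroids[i])})
    return detections
```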

The objects in Figure 14 were used as a means of simulating the occurrence of natural dopamine in response to the color, shape, and distance of the objects. Hate, pleasure, and great pleasure were each defined as corresponding to a specific quantity of naturally occurring dopamine.

Figure 14.

Experimental objects representing inclination and disinclination

3.3. Calculating method of naturally occurring dopamine as an input

Each object is located and recognized over time, because objects appear in and disappear from view, either simultaneously or with a time lag, and also change in shape and size, including the effect of distance. Hence, the naturally occurring dopamine response of each object (see Eqs. (1) and (2)) is calculated separately, and the total quantity of naturally occurring dopamine is determined by the sum of the positive and negative values, as shown in Figure 15. In addition, if a new object is located and recognized, the quantity of naturally occurring dopamine is recalculated according to the variation at that point in time.
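This summation step might be sketched as follows; the valence values assigned to each color are assumptions chosen only to illustrate how positive and negative per-object responses are totaled.

```python
# Hedged sketch: each recognized object drives its own dopamine response
# (via Eqs. (1)/(2)), signed by its valence, and the input to the motivation
# model is the running total.  Valence values below are assumed.

VALENCE = {"green": +1.0, "blue": -1.0, "flesh": +0.5}

def total_dopamine(per_object_traces, t_index):
    """Sum the positive and negative per-object responses at one time step.

    per_object_traces: {object_id: (color, trace_list)} where trace_list is the
    individual dopamine response of that object (recomputed when it appears,
    disappears, or changes size/distance).
    """
    total = 0.0
    for color, trace in per_object_traces.values():
        if t_index < len(trace):
            total += VALENCE.get(color, 0.0) * trace[t_index]
    return total
```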

The output waveform of the motivation is thus expressed from an input that is obtained based on the occurrence and the accelerator of dopamine described in the above sections.

The output waveform is estimated using a second-order linear differential equation similar to that of the naturally occurring dopamine. The input y(t) is the naturally occurring dopamine, and the output is Motivation(t).

Figure 15.

Waveforms of naturally occurring dopamine.

The output waveform of Motivation(t) is given by

$$\ddot{M}(t) + 2\zeta\omega_n\,\dot{M}(t) + \omega_n^2\,M(t) = \omega_n^2\,y(t) \tag{3}$$

where M(t) denotes Motivation(t), ωn is the natural frequency, and ζ is the damping factor.

Equation (3) is solved using the Runge-Kutta method to continuously calculate the variation of the input waveform.

The result is shown in Figure 16. Although the motivation waveform has a slight lag relative to the input of naturally occurring dopamine (the total sum in this figure), the motivation exhibits various variations depending on the variation of the occurring dopamine.

A waveform of the motivation can be determined using such a method, incorporating the natural frequency ωn, the damping factor ζ, and the time constant T. This method also makes it possible for the Conbe-I to behave consciously and autonomously, like human beings and animals.

The autonomous behavior system chooses the consciousness level according to the motivation's waveform and controls the actuators of the Conbe-I based on the behavior corresponding to that consciousness level, as shown in Figure 17. The boundaries between the consciousness levels can be determined freely.
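A minimal sketch of this thresholding is given below; the boundary values are free parameters, as noted above, and the numbers used here are assumed.

```python
# Sketch of the final mapping: the motivation waveform (Eq. (3)) is thresholded
# into one of the four consciousness levels.  The boundary values are assumed.

LEVEL_BOUNDARIES = (0.2, 0.5, 0.8)        # assumed boundaries between levels 0-3

def consciousness_level(motivation):
    """Return the consciousness level (0..3) for the current motivation value."""
    level = 0
    for boundary in LEVEL_BOUNDARIES:
        if motivation >= boundary:
            level += 1
    return level
```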

Figure 16.

Waveform of motivation.

Figure 17.

Relationship between motivation and consciousness.

4. Experiment

We confirmed that the Conbe-I could accurately recognize a favored green ball and a hated blue ball. We observed the action of the Conbe-I until it caught the favored green ball. The transition of the motivation is shown in Figure 18, the actual behavior is shown in Figure 19 (T0-T9), and T0-T9 in Figure 19 are explained as follows.

The Conbe-I is at rest at T0. Then, when the robot recognizes a green ball, its motivation increases slightly, and the robot begins to run after the ball at T1. The motivation of the robot increases further when the green ball does not move. The robot then recognizes the ball from a higher position at T2. The motivation increases sufficiently, and the robot begins to approach the green ball at T3. However, when the robot finds the hated blue ball at T4, its motivation suddenly decreases, and it takes its eyes off the blue ball. Because the robot has been shown the hated blue ball, it exhibits behavior showing that it hates the blue ball at T5 and T6. The robot is no longer being shown the hated blue ball at T7 and subsequently is shown only the favored green ball. Therefore, its motivation increases again, and it runs after the green ball at T8. When the motivation increases, it begins to approach the green ball, and the robot comes close enough to catch it at T9.

Figure 18.

Transition of the motivation.

Figure 19.

Behavior of Conbe-I.

This experiment was repeated several times. In every trial the result was the same: the Conbe-I took the favored ball. However, the behaviors and motivations up until it took the ball were obviously different in each experiment.

5. Conclusions

In this chapter, we have described the Conbe-I, which was developed to enable autonomous behavior; the Conbe-I was built with an autonomous behavior system combining motivation with consciousness using dopamine. The autonomous behavior system was able to exert effective control over the Conbe-I, and the Conbe-I behaved autonomously and freely.

It was difficult to anticipate the movement of the Conbe-I. The robot never repeated the same movement, even when taking the green ball; its movement was not mechanically obvious. We therefore believe that the Conbe-I displayed a choice of action resembling that of a human being or an animal.

This autonomous behavior system of the Conbe-I does not actually include an emotional component. However, the Conbe-I gives the appearance of a living thing, and its behavior seems to show emotion.

In the future, a learning system based on experience will be synthesized so that the Conbe-I will be able to control itself through awareness and introspection.

Acknowledgments

This research was partially supported by the Ministry of Education, Science, Sports and Culture, Grant-in-Aid for Scientific Research, 2009.

References

1. Hayashi, E. (2007). Navigation system with a self-drive control for an autonomous robot in indoor environment, 16th International Symposium on Robot & Human Interactive Communication (RO-MAN 2007), Aug. 2007, Jeju, Korea.
2. Hayashi, E. (2008). Navigation system for an autonomous robot using an ocellus camera in indoor environment, Journal of Artificial Life and Robotics, Vol. 12, Springer, pp. 346-352.
3. Hayashi, E., Yamane, M. & Mori, H. (1994). Development of Moving Coil Actuator for an Automatic Piano, International Journal of Japan Society for Precision Engineering, Vol. 28, No. 2, pp. 164-169.
4. Hayashi, E., Yamane, M. & Mori, H. (2000). Behavior of Piano-Action in a Grand Piano. I: Analysis of the Motion of the Hammer Prior to String Contact, Journal of the Acoustical Society of America, Vol. 105, pp. 3534-3544.
5. Hayashi, E., Yamane, M., Ishikawa, T., Yamamoto, K. & Mori, H. (1993). Development of a Piano Player, Proceedings of the 1993 International Computer Music Conference, pp. 426-427, Sept. 1993, Tokyo, Japan.
6. Hayashi, E. (2006). Development of an Automatic Piano that Produces an Appropriate Touch for the Accurate Expression of a Soft Tone, International Symposium on Advanced Robotics and Machine Intelligence (IROS06), Oct. 2006, Beijing, China.
7. Kimura, H. (2005). A trial to analyze the effect of an atypical antipsychotic medicine, risperidone, on the release of dopamine in the central nervous system, Journal of Aichi Medical University Association, Vol. 33, No. 1, pp. 21-27.
8. McCarthy, J. (1995). Proceedings of the Fourteenth International Joint Conference on Artificial Intelligence (IJCAI 95), pp. 2041-2044.
9. McCarthy, J. (1996). Making robots conscious of their mental states, in Machine Intelligence 15, S. Muggleton, Ed., Oxford University Press, Oxford, pp. 3-17.
10. Donnart, J. Y. & Meyer, J. A. (1996). Learning reactive and planning rules in a motivationally autonomous animat, IEEE Transactions on Systems, Man, and Cybernetics, Vol. 26, No. 3, pp. 381-395.
11. LeDoux, J. (1996). The Emotional Brain: The Mysterious Underpinnings of Emotional Life, Simon & Schuster, New York.
12. Asami, K., Hayashi, E., Yamane, M., Mori, H. & Kitamura, T. (1998). Intelligent Edit Support for an Automatic Piano, Proceedings of the 3rd International Conference on Advanced Mechatronics, pp. 342-347, KAIST, Aug. 1998, Taejon, Korea.
13. Asami, K., Hayashi, E., Yamane, M., Mori, H. & Kitamura, T. (1998). An Intelligent Supporting System for Editing Music Based on Grouping Analysis in Automatic Piano, IEEE Proceedings of RO-MAN'98, pp. 672-677, Sept. 1998, Kagawa, Japan.
14. Goto, N. & Hayashi, E. (2008). Design of Robotic Behavior that Imitates Animal Consciousness, Journal of Artificial Life and Robotics, Vol. 12, Springer, pp. 97-101.
15. Brooks, R. A. (1991). Intelligence without representation, Artificial Intelligence, Vol. 47, pp. 139-159.
16. Talwar, S. K., Xu, S., Hawley, E. S., Weiss, S. A., Moxon, K. A. & Chapin, J. K. (2002). Rat navigation guided by remote control, Nature, Vol. 417, pp. 37-38.
17. Kitamura, T. & Nishino, D. (2006). Training of a Learning Agent for Navigation Inspired by Brain-Machine Interface, IEEE Transactions on Systems, Man, and Cybernetics, Vol. 36, No. 2, pp. 353-365.
18. Tran Duc Thao, Herman, D. J. & Morano, D. V. (1986). Phenomenology and Dialectical Materialism, Boston Studies in the Philosophy of Science, D. Reidel Publishing Co.
19. Takahashi, Y. & Asada, M. (2003). Multi-layered learning systems for vision-based behavior acquisition of a real mobile robot, Proceedings of the SICE Annual Conference, pp. 2937-2942, Fukui, Japan.
20. Hikisaka, Y. & Hayashi, E. (2007). Interactive musical editing system to support human errors and offer personal preferences for an automatic piano: Method of searching for similar phrases with DP matching and inferring performance expression, Artificial Life and Robotics (AROB 12th '07), CD-ROM, Jan. 2007, Oita, Japan.
