
Attentive Computing

Written By

Ahmet Cengizhan Dirican and Mehmet Gokturk

Published: 01 December 2009

DOI: 10.5772/7744

From the Edited Volume

Human-Computer Interaction

Edited by Inaki Maurtua


1. Introduction

Although the purpose of humanity's technological progress is open to debate, it can be concluded that the outcome of advancements should enhance the quality of life in general. This enhancement depends not only on the new functionalities provided by advancements, but also on the strategies used to present these functionalities. Presentation strategies are particularly important for technologies that are used ubiquitously, as is the case for many modern digital computing systems.

One can observe that the functionality side of the enhancements provided by computing systems, driven by developments in electronics and communication technologies, has been realized with great success for years. The presentation side, on the other hand, despite significant developments up to the early nineties, has not shown the same advancement.

People live a modern digital life surrounded by various digital computing systems. Such a life, alongside its numerous advantages, brings extra problems to cope with, as discussed in Section 3.1. Due to their inattentive behavior, described in Section 3.2, the computing systems at hand both cause new problems and worsen existing ones. This can be attributed to the over-visibility of current digital computing systems: they are too visible to be exploited efficiently. Yet their visibility has been in constant increase in parallel with their quantity and services (Weiser, 1991).

This might be attributed to the fact that traditional design approaches no longer meet the demands of contemporary information-hungry devices. Managing the information conveyed through such systems has gradually become a heavier burden. The ubiquity of computing systems and multiparty interaction lead to a breakdown in the existing channels of interaction (Vertegaal et al., 2006). Computing systems bombard their users with immediate attention requests by sending interruptions. The well-known execution-evaluation cycle framework cannot work properly because of the attention switching caused by these interruptions. Besides, computing systems fall behind in supporting their users as the users' mental load increases. They still provide only explicit and very limited interaction channels and methods for managing both the system and the user's information environment.

Since the early 1980s, researchers have been emphasizing the above problems and seeking new interaction methods and channels. Among these, Bolt's "Put-That-There: Voice and Gesture at the Graphics Interface" (Bolt, 1980), Jacob's "What You Look At is What You Get: Eye Movement-Based Interaction Techniques" (Jacob, 1990), Weiser's "Ubiquitous Computing" (Weiser, 1991) and Nielsen's "Non-Command User Interfaces" (Nielsen, 1993) can be given as pioneering examples of early studies that shaped 21st-century computing paradigms. Consequently, several new computing paradigms emerged on the research scene, such as Affective Computing, Context-Aware Computing, Perceptual Computing and Attentive Computing, the subject of this chapter.

The name of each suggested paradigm conveys to readers the kind of principal approach utilized. Affective Computing, for example, intends to improve the interaction between humans and computing systems by sensing human emotions (Pickard, 1998). Context-aware Computing is based on situational awareness and contextual information: what and where the user task is, what the user knows, and what the system capabilities are (Selker & Burlesson, 2000). Attentive Computing, on the other hand, aims to regulate the interaction through the observation of human attention.

Human attention is a crucial but limited and fragile cognitive resource that must be carefully exploited, and even augmented if possible, as described in detail in Section 2.3. The fundamental purpose of AC is to preserve and support human cognitive resources. By sensing humans' past, present and future attention, AC aims to determine the user's ongoing tasks, interests, goals and priorities. This makes it possible to provide the user with relevant information and necessary support. Consequently, AC aims to ensure a more natural, unobtrusive and efficient human-computer interaction. This naturalness, unobtrusiveness and efficiency may well be turned into invisibility, because most of the "visible" things that are noticed by, or attract the attention of, humans are things that are unnatural, obtrusive and hard to use.

The invisibility of computing systems is also considered differently in the literature. Don Norman, for example, attributes the invisibility of computing systems to devices becoming seriously task-specific, so that their interfaces blend into the background and go unnoticed by users (Borriello, 2000). This seems to be an important challenge for the future of computing systems. Nevertheless, even if we design task-specific devices, the invisibility of their interfaces will probably still depend on their behavior and their user-sensing and information-presentation capabilities. From this point of view, AC is highly promising both for current multi-purpose computing systems and for next-generation task-specific devices.

The rest of the chapter is organized as follows. In Section 2, we cover a number of subjects needed for a good understanding of AC. In Section 3, the quest for AC is investigated; all aspects of this quest and the need for the AC paradigm are discussed in detail. In Section 4, attentiveness in the literature, the definition and properties of AC, and our PRO-D framework model for the implementation of Attentive Computing Systems (ACSs) are highlighted. In Section 5, important examples of Attentive Computing Systems are given. Conclusions and future directions are in Section 6.


2. Background

In this section, we aim to provide the necessary background for a good understanding of Attentive Computing (AC). This requires a detailed look at the following issues:

  1. Ubiquity of Computing Systems

  2. Multiparty Interaction

  3. Humans and Attention

  4. Multiparty Interruptions and the Broken Execution Evaluation Cycle

Figure 1.

Multiparty Interaction: A user (gray head) is surrounded by multiple digital computing systems.

2.1. Ubiquity of Computing Systems

In his original proposal of "ubiquitous computing", Weiser envisioned digital computing systems becoming ubiquitous and, like electric motors, invisible, weaving themselves into the fabric of everyday life until they are indistinguishable from it. They would be everywhere, but people would not be aware of their existence (Weiser, 1991).

Today, one can observe that Weiser's predictions about the ubiquity of computing technologies have substantially come true. Digital computing systems have become a part of everyday life. Although they still preserve their classic design and are not as invisible as Weiser predicted, many are used ubiquitously in devices and systems ranging from a child's toy to the control system of a nuclear power plant.

2.2. Multiparty Interaction

People live a life surrounded by digital computing devices such as computers, PDAs, mobile phones, Blackberries, iPods or even the infamous microwave oven (Fig. 1). If there were a Moore's Law equivalent for the number of digital computing devices, it might well state that the number of types of computing devices per user doubles every year. In other words, the number of computing systems surrounding a person has been increasing steadily for decades. In parallel to this increase, the interaction between humans and digital computing systems has also changed over time.

In the earlier days of computing, users sat in a many-to-one interaction model against a mainframe computer, working through dumb terminals. With the introduction of personal computers, every user sooner or later possessed one or more standalone computers, unarguably more powerful and capable than the early ones. This progress brought the one-to-one interaction model onto the scene. With the recent rise of mobile computing and the rapid decline in device costs, this model evolved into a one-to-many interaction. Today, a typical user attempts to use more than one device at a time: at least a desktop PC and possibly one or more cell phones, each running individual applications simultaneously.

Nevertheless, one-to-many interaction cannot explain the whole picture. More complex situations can be observed with digital computing systems. Specifically in urban life, people share the same spaces most of the time, such as offices, meeting rooms and public transportation vehicles. They cannot avoid being affected by other devices in the shared environment. These devices may belong to other people or may be embedded within the environment, such as air conditioners and coffee machines. As a result, people have developed a many-to-many, or multiparty, interaction with the surrounding digital systems (Vertegaal et al., 2006).

2.3. Humans and Attention

Shneiderman states that "Harnessing the computer's power is a task for designers who understand the technology and are sensitive to human capabilities and needs" (Shneiderman, 1998). Therefore, a good understanding of human capabilities, their limits or capacities, the underlying mechanisms, and the cost of abusing them and the gain of supporting them, is crucial for designing usable computing systems.

Humans are not machines. They get tired and forget easily. They are slow and not good at repetitive tasks such as mathematical calculations and sorting, especially when the tasks are numerous and sequential. However, humans are distinguished from computers, and even from other living things, by their intelligence and their cognitive or mental capabilities such as thinking, learning, problem solving, decision making and remembering.

These crucial capabilities rest on the cognitive mechanisms humans use to acquire, store and process information. The information, or stimuli, coming from the outside world through the sensory organs exceeds what humans are actually capable of processing most of the time. Fortunately, humans have cognitive mechanisms, called attentional mechanisms, that enable them to filter and select the incoming sensory information (Roda & Thomas, 2006). These mechanisms, collectively referred to as human attention, govern the selection of relevant information and the filtering of irrelevant information from incoming stimuli (Roda & Thomas, 2006). The selected and filtered information is then carried into human working memory and becomes usable for realizing the cognitive or mental capabilities. In other words, these capabilities depend on the health of the human attentional mechanisms.

The underlying mechanisms of human attention involve controversial issues. Many questions remain to be answered scientifically, such as "Are filtering and selection cognition-driven or input-driven?", "Are they realized during perception or during cognition?", "How do distracters affect attention?" and "Can humans attend to many things at a time?". There are many models and theories proposed for these and other questions in the literature. Discussing all of these theories and models is beyond the scope of this chapter. Interested readers may find a comprehensive introduction to them, from an Attentive Computing perspective, in (Roda & Thomas, 2006). In this chapter, readers are provided with an introductory overview of theories and findings about the subject.

Treisman's "Feature Integration Theory" states that filtering and selection are guided by both input and cognition (Roda & Thomas, 2006). This means that information is filtered both before entering the brain, during perception, in a preattentive stage, and after entering the brain, during cognition, in an attentive stage. One consequence is that eye tracking alone is insufficient to indicate cognitive interest: when humans look at something, it is a good indication of physical observation, but it is not clear whether the information has been mentally processed or not (Vertegaal, 2002).

Humans can absorb and attend to only one thing at a time. As a user pays attention to something, any other stimulus that tries to use the single attentional channel may cause the user's attention to be distracted. According to the "modern theory of attention", irrelevant information is excluded from processing only if the prioritized relevant processing exhausts all the available capacity (Roda & Thomas, 2006); otherwise, distracters will be processed. In this account, the locus of selection depends on the load of incoming stimuli.

Although human attention is often treated as if it were an unlimited cognitive resource, its performance is in fact limited. It is easily broken by distracters (Vertegaal, 2002) and competing attention seekers. As a result, the increasing demand on user attention is considered a crucial usability problem (Vertegaal, 2003). Human attention, or attentional capacity, should not be wasted and should even be supported if possible (Vertegaal et al., 2006).

Figure 2.

Multiparty Interruptions: While a user (big gray head) tries to read a sales report, other devices and software try to attract the user's attention by sending interruptions to present their information.

2.4. Multiparty Interruptions and Broken Execution-Evaluation Cycle

An interruption is an external stimulus that tries to attract the user's attention. It is perceived by users according to its incoming channel (sight, touch, sound, smell), its volume (weak or strong), its relevance to the ongoing task, and the user's concentration on the ongoing task (Roda & Thomas, 2006).

Fig. 2 illustrates interruptions coming from multiple computing systems around a user. In this figure, a user (gray head) is in a multiparty interaction with the computing systems around him. While the user tries to read a sales report document in a word processor, other software and computing devices rudely interfere and try to attract the person's attention by sending interruptions to convey their information. This causes the user to lose his focus of attention and the existing, primary interaction to break down. A loss of motivation and performance is thus inevitable.

Humans are weak against interruptions because of their cognitive limitations, as discussed in Section 2.3 (McFarlane, 1999). Research suggests that a 15-second interruption may cause the user to drop some items from his or her short-term to-do list (Gibbs, 2005). Bailey et al. indicate that a computer-initiated interruption causes a significant increase in completion time for a variety of web-based tasks (Bailey et al., 2000). Their study proposes that there is a positive relation between the task completion time and the memory load at the time of interruption.

An interruption causes the user's attentional channel to switch. This causes the well-known execution-evaluation cycle to break down. The execution-evaluation cycle framework was suggested by Donald Norman and later revised by Dix et al. (2004). It can be depicted as shown below (Fig. 3), where only one attentional and one motor channel are available to the user:

Figure 3.

General Interaction Framework (Dix et al., 2004).

In a digital system scenario (Fig. 3), re-observation of the system output becomes necessary due to the interrupted attention and the contention among presentations. Some of the presentations, including the current one, might have a reduced effect or be cut out completely from the user's attentional channel.
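As a toy illustration of this re-observation cost, the sketch below walks through one execution-evaluation cycle; the stage names follow Norman's cycle, while the cost accounting and the "fall back to observation" rule are assumptions made purely for this example.

```python
from enum import Enum

class Stage(Enum):
    GOAL = "formulate goal"
    EXECUTE = "execute action"
    OBSERVE = "observe output"
    EVALUATE = "evaluate outcome"

def cycle_cost(interrupt_during: set[Stage]) -> int:
    """Count the stage visits needed to finish one execution-evaluation cycle.
    An interruption while observing or evaluating forces the user to
    re-observe the output before evaluation can complete."""
    plan = [Stage.GOAL, Stage.EXECUTE, Stage.OBSERVE, Stage.EVALUATE]
    pending = set(interrupt_during)   # each interruption fires once
    visits = 0
    i = 0
    while i < len(plan):
        stage = plan[i]
        visits += 1
        if stage in pending and stage in (Stage.OBSERVE, Stage.EVALUATE):
            pending.discard(stage)
            visits += 1                     # handling the interruption itself
            i = plan.index(Stage.OBSERVE)   # fall back to re-observation
            continue
        i += 1
    return visits

print(cycle_cost(set()))                # 4: an uninterrupted cycle
print(cycle_cost({Stage.EVALUATE}))     # 7: the interruption forces re-observation
```

Even this crude accounting shows how a single interruption inflates the cost of a cycle, which is the effect the studies cited above measure empirically.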

Figure 4.

A single system with multiple processes, each with individual attention demands.

Since device interaction relies significantly on short-term memory, any factor with an adverse effect on short-term memory impairs the interaction. User frustration, re-evaluation of screens and forgetting the temporary task list are typical results of such interruptions (Fig. 4). When multiple digital systems are concerned, the situation becomes more complicated, yet exhibits the same problem. Multiple digital systems with attentional demands are depicted in Fig. 5:

Figure 5.

Multiple digital systems: each demanding user attention and sending interruptions to the user.

Furthermore, when multiple digital systems are concerned, the problems of each single device running multiple processes are automatically added to the case above.


3. The Quest for Attentive Computing

In this section, the quest for Attentive Computing (AC) is considered in detail. Each of the following subjects is discussed in its own subsection.

3.1. Problems Encountered in Modern Digital Life

An arrogant cell phone rings during a meeting. The attendees hesitate for a moment and check their phones to see whether the ringing phone is theirs. The owner of the device is soon revealed, since he or she has to turn it off immediately so as not to attract too much attention and anger. While the owner of the phone suffers an unavoidable embarrassment, other people's attention is distracted and valuable meeting time can easily be wasted.

Elsewhere, people may experience the frustration of screen savers activating in the middle of presentations. In an office, while the head of the sales department tries to focus on finishing an urgent report about the company's first-quarter profit, he or she may receive an obtrusive update warning from the operating system or the virus protection program. One can complain about how problematic the management of multiple windows is, how hard it is to find something on the computer's desktop, or how hard it is to use a pointer and mouse on large screen displays.

The number of such scenarios can easily be multiplied, and one can safely say that these and similar situations are quite widespread all around the world. We live a modern digital life in which computing systems are ubiquitous (Section 2.1) and there is a multiparty interaction between users and computing systems (Section 2.2).

Lifestyles change continuously as countries, companies and people engage in fierce competition. The latest innovations spread and affect the world in a very short time. Whatever happens in the world is easily brought to the screens of people's digital computing devices, even when it is not particularly desired. Managing the information that is produced globally and conveyed through computing systems is becoming more difficult as the "information overload" grows heavier.

At first glance, the ubiquity of computing systems and its natural result, multiparty interaction, may appear highly appealing and profitable to the reader. Having and using a number of computing systems with attractive properties and services may seem similar to having, at a low price, many capable human assistants who are always ready to serve and never get tired. However, this is not the situation with the computing systems at hand, due to their isolated and inattentive behavior described in Section 3.2. They do not behave like human assistants.

While the price one pays to buy these systems decreases, the burden one has to carry to use them increases. Vertegaal, the head of the Human Media Laboratory at Queen's University, explains this situation: "Although the trend to use more computing devices may provide an opportunity for increased productivity, such benefit comes at a cost." He defines this cost as the requirement to be available, at any time or place, in order to swiftly adapt to changes in our information environment (Vertegaal, 2003).

Early computing systems, in a sense, were like powerful calculators, capable typewriters or fast electronic record keepers with limited communication ability. After the rise of the internet and other worldwide mobile communication technologies such as GSM, computing systems have become humans' principal communication channels.

Most people use at least email, instant messaging and cell phones several times throughout the day. People are almost always connected and open to communication with the rest of the world. Additionally, computing systems are semi-autonomous and have multitasking capabilities. Together with the operating system, all the software on a device wants to communicate with the user, without any regard for his or her availability for communication. Unfortunately, in most cases it is impossible to refuse to use, or even to turn off, computing systems, due to the economic and social demands of modern life (McCrickard & Chewar, 2003).

3.2. The Behavior of Current Digital Computing Devices

Today, while a bathroom faucet or a hand dryer can recognize the physical status of its user and regulate its operation accordingly, digital systems that are far more intelligent in a technical sense do not exhibit such "attentive" user-sensing and servicing abilities. Computing systems live in a world that is isolated from the outside world. They are unaware of the user's existence, proximity, context, actions, interests, goals, tasks and priorities. Strictly speaking, they are unaware of the physical, perceptual and cognitive state of the user.

They are designed in such a way that they give a response only if one explicitly indicates something by means of the classical input devices, a keyboard and mouse; otherwise they remain uninterested. Worse, when they have something to tell, they do not hesitate to interrupt the user without any regard for the current user context or task. Systems, and specifically computing devices, still work as if they were the user's single device and do not hesitate to burden the user's cognitive resources, such as attention and working memory, with the interruptions they send (McCrickard & Chewar, 2003).

Additionally, the computing systems at hand fall behind in supporting their users. It is the user's responsibility to explicitly manage and control the system's screen real estate and other resources. A large and complicated desktop, a pointer, tens of windows, lots of icons, many long branching menus, high-resolution graphics and so on all wait for the "direct manipulation" of the user.

Considering the above attitudes of current computing systems, it can be said that they are "inattentive" or, in Gibbs's words, inconsiderate systems (Gibbs, 2005). Current computing systems are ill equipped to negotiate their communications with humans and poor at providing support (Vertegaal et al., 2006). This is because they still rely on traditional direct manipulation techniques based on traditional graphical user interfaces (GUIs), whose standards were specified as early as 1983 (Nielsen, 1993). In the traditional GUI model, there is an explicit, object-driven, one-to-one interaction between the user and the system. In this exclusive work style, the priority of interaction (the locus of control) is given to the device instead of to the user and his or her needs.

3.3. The Need for Attentive Computing

For a good understanding of the need for Attentive Computing (AC), one should first comprehend the following issues: the cognitive limits and properties of humans (Section 2.3), the adverse effects of interruptions on human performance (Section 2.4), the problems caused by modern digital life and the role of computing systems in these problems (Section 3.1), and the inattentive, arrogant and unhelpful behavior of digital computing systems (Section 3.2).

When all the above issues are considered together, it is clear that current computing systems are insufficient to meet the increasing needs of users in today's modern digital life. Yet these systems constitute an important part of users' lives. They should no longer behave like passive, ordinary tools such as typewriters, cupboards, pens and paper. Considering the place of computing systems in the daily environment, the services they provide and the time spent with them, they rather resemble partners or assistants.

Yet attentive and helpful assistants, or "good assistants" in the words of Maglio & Campbell, are needed. They describe a good assistant as one that actively filters incoming information, communicates in an appropriate manner, and is aware of the supervisor's needs, goals and interests (Maglio & Campbell, 2003).

From a technical point of view, further development of interaction methods and new unobtrusive interaction channels between users and computing systems is needed, taking social and individual behaviors into account. Direct manipulation, graphical user interfaces, WIMP (Windows, Icons, Menus and Pointer) and the classical input channels of keyboard and mouse, even if they have served well so far, are showing their limits. They will of course continue to serve, but there is a need for computing systems that are attentive to their users.


4. Attentive Computing

4.1. Attentiveness in the Literature

Attentive Computing (AC) is a relatively new subject with respect to classical HCI computing paradigms. However, a variety of previous research on AC can be found in the literature under different names since the late nineties. The leading studies and subtopics include Attentive User Interfaces, Attentive Information Systems, Attentive Agents, Attention-Aware Systems, Attention-Based User Interfaces, Attentional User Interfaces, Attention-Centric Notification Systems, Attentive Displays, and Attentive Robots and Toys.

The names of the systems proposed under the AC notion may differ depending on the type of computing system and the author's point of view on the problem. While some authors approach the problem from a "system" perspective, like Roda & Thomas's Attention-Aware Systems, others consider it a user interface problem, like Vertegaal's Attentive User Interfaces. In this chapter we have chosen to use the term AC for the basic computing paradigm that encompasses all attentive studies and, similarly, the term Attentive Computing System as an inclusive umbrella term for systems that have the properties laid out by AC.

An interesting study that handles the subject as a computing paradigm, although in a less scientific manner, was conducted by Gibbs (2005). He coined the name "Considerate Computing" in an article published in Scientific American. The article is referenced here as a well-organized study handling the subject from a popular point of view. Gibbs's article includes different researchers' opinions on the subject. It discusses the causes and problems of current computing systems, which are quite disrespectful and behave in isolation from their users. It concludes that computing systems should be considerate toward their users.

Readers may wonder why the term "attentive" has been selected in this chapter instead of "considerate". The answer lies in the fact that "considerate" is less formal than "attentive" and points only to the behavior of computing systems. The term "attentive", on the other hand, evokes the notion of attention, which acts as the primary interaction channel of an ACS (Selker, 2004).

4.2. Definition and Properties

Selker defines an Attentive User Interface (AUI), perhaps the most popular kind of Attentive Computing System (ACS), as a context-aware human-computer interface that relies on a person's attention as the primary input (Selker, 2004). Vertegaal defines it as "a user interface that is sensitive to the user's attention" in the introductory text of the special session dedicated to Attentive User Interfaces at the CHI 2003 conference (Vertegaal, 2003).

In this chapter we propose to expand the notion of an ACS somewhat, following the definition given for AUIs in (Dirican & Göktürk, 2008), and define an ACS as "a computing system that is sensitive to the user's cognitive resources, with attention being foremost". We believe that although AC proposes to utilize the user's attention in optimizing the interaction, it addresses the user's entire cognitive resources, such as perceptual mechanisms and working memory.

AC aims to create computing systems, called Attentive Computing Systems (ACSs), that behave in harmony with their users. By preserving the user's attention and other cognitive resources, an ACS tries to protect users from today's ubiquitous pattern of interruptions (Vertegaal et al., 2006). This is done through unobtrusive negotiation and mediation instead of imposing messages. Besides, an ACS tries to provide active support and assistance to its users by means of additional filtering and notification mechanisms relevant to the user's needs, goals, tasks, ongoing activities and priorities.

4.3. Modified Interaction Framework

The general interaction framework discussed in Section 2.4 can be extended to include an ACS (Fig. 6). The modified framework includes an attentional monitor that watches user attention and context, filters and mediates the information to be presented to the user, and is also capable of exchanging and declaring attention data to other digital devices.

Figure 6.

Modified General Framework of Interaction for an ACS.

When multiple digital systems are concerned, the case can be depicted as in Fig. 7, where attention information is exchanged between digital systems, since some may lack the sensory capability required to monitor user attention. Furthermore, the exchange and fusion of attention data between digital systems would enable an even stronger mediation between the user and the surrounding devices.

Figure 7.

Multiple digital systems with attentive user interfaces.

Through the mediation of attention, execution-evaluation cycles are delayed until a proper candidate has been elected. ACSs implement this mediation on the basis of measures and models of the past, present and future state of the user's attention (Maglio et al., 2000). They need new communication channels to obtain and construct these measures and models, and new interaction methods to maintain the above behavior. We combine these models, channels and methods within our suggested PRO-D framework model of ACSs.

4.4. PRO-D Framework Model

PRO-D is a suggested framework model for ACSs based on the five key features proposed by Shell et al.: sensing attention, reasoning about attention, gradual negotiation of turns, augmentation of attention, and communication of attention (Shell et al., 2003).

By focusing on PRO-D, we aim to provide a generic framework model for attentive computing and other parallel computing paradigms. Our model has four key stages (Fig. 8): perception of attentional cues, reasoning about attention, optimization of attention, and declaration of attention. The optimization stage has two sub-stages: regulation of interaction and augmentation of attention. We think that a computing system can only be called attentive if it has the first two stages and at least one of the other two; systems that do not preserve, support or declare the user's attention bring nothing attentive.

Figure 8.

Framework model PRO-D for Attentive Computing.
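As an illustration only, the following Python sketch shows how the four PRO-D stages might be wired together in software. The class and method names, the fake sensor cues and the print-based outputs are assumptions made for this sketch; they are not part of any published ACS implementation.

```python
from dataclasses import dataclass

@dataclass
class AttentionState:
    """Hypothetical summary of what the reasoning stage infers about the user."""
    focus: str            # e.g. "report_editor", "conversation", "unknown"
    interruptible: bool   # whether a non-urgent message may be surfaced now

class ProDSystem:
    """Minimal sketch of the PRO-D stages: Perception, Reasoning,
    Optimization (regulation + augmentation) and Declaration."""

    def perceive(self) -> dict:
        # Gather attentional cues (gaze, speech, proximity, input activity, ...).
        # Faked with a single static reading for illustration.
        return {"gaze_target": "report_editor", "typing": True}

    def reason(self, cues: dict) -> AttentionState:
        # Infer the focus of attention and interruptibility from the cues.
        busy = cues.get("typing", False)
        return AttentionState(focus=cues.get("gaze_target", "unknown"),
                              interruptible=not busy)

    def optimize(self, state: AttentionState, message: str) -> None:
        # Regulation: defer or route to a peripheral channel while the user is busy.
        if state.interruptible:
            print("FOREGROUND:", message)
        else:
            print("peripheral ticker:", message)

    def declare(self, state: AttentionState) -> None:
        # Declaration: share the inferred state so other devices can mediate too.
        print(f"broadcast: focus={state.focus} interruptible={state.interruptible}")

system = ProDSystem()
state = system.reason(system.perceive())
system.optimize(state, "Anti-virus update available")
system.declare(state)
```

The individual stages are discussed in the following subsections.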

4.4.1. Perception of Attentional Cues

In this stage, the user and the environment are monitored through several channels in order to obtain attentional cues about the user's current focus of attention. In other words, the ACS tries to gather, by monitoring the user, the information needed in the next stage.

Although ACSs take advantage of the classical explicit input channels, such as the mouse, keyboard and joystick, to track the user, they need richer input information. Enhanced multichannel or multimodal user information is required to implement an ACS that can determine the user's current status, his or her location, and the person, device or object currently in the focus of attention.

Popular methods utilize gaze tracking, since previous research suggests that people look at what they attend to in most cases (Zhai, 2003). Maglio et al. studied how people address computing devices during both verbal and non-verbal interaction. The results suggest that people use command phrases when they make computing devices do something; they rarely say the name of the device. Instead, most people look at the device before or after giving the command. These findings confirm that eye tracking is critically important for understanding users' cognitive interests.

Other ways of collecting user information include speech recognition, presence detection, proximity detection, gesture detection and even posture detection. Studies of heart rate variability (HRV), electroencephalography (EEG) and electrooculography (EOG) in HCI also suggest that these signals can convey user status and cognitive load data if processed correctly (Chen & Vertegaal, 2004; Rowe et al., 1998).

ACSs and similar computing paradigms that collect multimodal information, sometimes including audio and facial video, ultimately face a privacy protection problem. Since any inference about the user requires various kinds of personal status information, evaluating the data while keeping it private needs to be addressed carefully. Earning, protecting and keeping the user's confidence is an important open problem of AC (Shell, 2002). Few studies address privacy issues in AC, while many others never mention them, even though addressing privacy is key to gaining users' confidence. To this end, it is proposed to take advantage of the privacy results obtained for similar computing paradigms, especially within ubiquitous computing.

4.4.2. Reasoning about Attention

Maglio et al. define ACSs as systems that pay attention to what users do so that they can attend to what users need. In order to optimize the user's attention, a system must first detect it. In this stage, the system tries to determine what the user is attending to and his or her cognitive status. By means of this information, users are provided with more social interaction with computing devices, information relevant to their needs and goals, and support with respect to their focus of attention.

AC uses models that incorporate the user's past, present and future attention characteristics and predict future interests and goals. Information gathered in the perception stage is used as input to these models. Bayesian networks and influence diagrams are among the most popular modeling methods in AC (Horvitz et al., 2003). Heuristic approaches, statistics and predefined user priorities are other methods used in the literature (Vertegaal et al., 2006).

Horvitz's Priorities is an attentional user interface that is a good example of reasoning about attention (Horvitz et al., 1999). The application decides which of the mails received on the user's desktop computer will be delivered to a mobile computing device. The Priorities system makes this decision with respect to the user's reply frequency and mean response time for the sender of interest. Thus, the system infers the user's attention to and interest in senders from the reply frequency and mean response time.
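A toy illustration of this kind of sender scoring is sketched below; the weighting formula and the forwarding threshold are invented for the example and are not the actual Priorities model.

```python
def sender_priority(reply_rate: float, mean_response_hours: float) -> float:
    """Toy score: senders the user answers often and quickly get higher priority.
    reply_rate is the fraction of this sender's mails that the user has replied to."""
    responsiveness = 1.0 / (1.0 + mean_response_hours)  # faster replies -> closer to 1
    return reply_rate * responsiveness

def forward_to_mobile(reply_rate: float, mean_response_hours: float,
                      threshold: float = 0.3) -> bool:
    # Deliver to the mobile device only for senders the user clearly attends to.
    return sender_priority(reply_rate, mean_response_hours) > threshold

print(forward_to_mobile(reply_rate=0.9, mean_response_hours=1.0))   # True
print(forward_to_mobile(reply_rate=0.1, mean_response_hours=24.0))  # False
```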

Attentive Displays reason about where on the display the user is looking through information gathered from eye-gaze detection (Zhai, 2003). Maglio et al.'s SUITOR reasons about its user's attention and interests by means of eye-gaze detection, application use, web browsing and the user's email content, and supports the user with suitable information related to those interests. Traffic lights, as an interesting example beyond personal computers and mobile computing devices, reason about traffic density based on live information coming from coils in the road and statistics of normal traffic flow (Vertegaal, 2002).

Here the key word is the user's interruptibility. Even if the user's focus of attention cannot be determined exactly, whether the user is available for an interruption may be determined in different ways. Fogarty et al., for example, do this using simple sensors with 82.4% accuracy (Fogarty, 2004). Chen & Vertegaal do this by means of physiological signals, EEG and HRV, and distinguish four states for a user (Chen & Vertegaal, 2004).
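As a hedged illustration of sensor-based interruptibility estimation, in the spirit of, but not reproducing, Fogarty's statistical models, the rule-based sketch below maps a few hypothetical sensor readings to an interruptibility decision.

```python
from typing import Mapping

def estimate_interruptible(sensors: Mapping[str, bool]) -> bool:
    """Very rough heuristic: the user is treated as non-interruptible while
    talking, on the phone, or actively typing; otherwise interruptible.
    The sensor names are invented for this sketch."""
    busy_signals = ("speech_detected", "phone_off_hook", "keyboard_active")
    return not any(sensors.get(name, False) for name in busy_signals)

print(estimate_interruptible({"speech_detected": True}))                       # False: in a conversation
print(estimate_interruptible({"door_open": True, "keyboard_active": False}))   # True
```

A statistical model such as the ones cited above would learn the weights of such features from labeled observations instead of hard-coding them.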

4.4.3. Optimization of Attention

The optimization of attention means the effective and efficient use of the user's valuable attention and other cognitive resources. This is done in two sub-stages. The first is the regulation of interaction, which aims to preserve the user's cognitive resources and provide a natural attention-switching mechanism by means of a turn-taking paradigm. The second is the augmentation of attention, which generally aims to support the user's cognition by means of the cocktail-party effect.

4.4.3.1. Regulation of Interaction

Regulation of interaction is based on the user's current cognitive load, interests and goals as determined in the previous stages. AC implements a turn-taking paradigm, which coordinates the reaction timing of interfaces based on attentional information. This process is also called the gradual negotiation of turns, by analogy with human group conversations (Vertegaal, 2003).

In a multi-person group conversation, people can easily determine the timing of speech, when to speak and when to be silent, usually by using their eyes, extracting contextual information and recognizing facial gestures. The turn-taking process regulates the attentional demand on each interlocutor and enables smoother interaction between them.

AC tries to imitate the turn-taking process in computing systems in a similar manner. When an ACS decides to convey information to the user, it first weighs the importance and urgency of its own request against the user's focus of attention. Then, depending on the result of this evaluation, the system signals its request to the user through a peripheral channel. The ACS waits for the user's approval before bringing the essential information to the foreground. SUITOR does this with a one-line scrolling text display located at the bottom of the screen (Maglio et al., 2001).
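The following sketch illustrates one possible way of coding this negotiation; the urgency scale, the channel names and the tick-based interface are assumptions made for the example, not part of SUITOR or any other cited system.

```python
import heapq
import time

class TurnTakingMediator:
    """Queues notifications and releases one only when the user yields a turn;
    while the user stays focused, the request is only hinted at peripherally."""

    def __init__(self):
        self._pending = []   # heap of (negative urgency, timestamp, text)

    def submit(self, text: str, urgency: int) -> None:
        heapq.heappush(self._pending, (-urgency, time.time(), text))

    def tick(self, user_attentive: bool, user_acknowledged: bool) -> None:
        if not self._pending:
            return
        _, _, text = self._pending[0]
        if user_attentive and not user_acknowledged:
            # User is busy and has not yielded a turn: keep the request peripheral.
            print("peripheral ticker:", text)
        else:
            # User is free or has acknowledged the hint: take the turn.
            heapq.heappop(self._pending)
            print("FOREGROUND:", text)

mediator = TurnTakingMediator()
mediator.submit("3 new e-mails", urgency=1)
mediator.tick(user_attentive=True, user_acknowledged=False)   # stays peripheral
mediator.tick(user_attentive=False, user_acknowledged=False)  # delivered
```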

4.4.3.2. Augmentation of Attention

AC tries to augment and support the user's attentional resources by imitating the cocktail-party effect. This effect is the human ability to focus on a particular conversation among many others in crowded places such as parties. If we did not have such a capability, we would have serious difficulty in noisy places. An ACS, in an analogous fashion, aims to let the user focus on what he or she desires by attenuating peripheral details and highlighting the information to be focused on. Bolt's Gaze-Orchestrated Dynamic Windows (Section 5.2) and Attentive Displays (Zhai, 2003) are good examples of the augmentation of attention.
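A minimal sketch of this idea, loosely modeled on Bolt's gaze-orchestrated windows (Section 5.2) but with invented gain values, simply raises the volume of the attended stream and attenuates the rest.

```python
def mix_volumes(gazed_window: str, windows: list[str],
                focus_gain: float = 1.0, peripheral_gain: float = 0.15) -> dict[str, float]:
    """Return a per-window audio gain: full volume for the attended window,
    strongly attenuated (but not muted) gains for the peripheral ones."""
    return {w: (focus_gain if w == gazed_window else peripheral_gain) for w in windows}

episodes = ["news", "soccer", "weather"]
print(mix_volumes("soccer", episodes))
# {'news': 0.15, 'soccer': 1.0, 'weather': 0.15}
```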

4.4.4. Declaration of Attention

Notifying other users and devices of the determined user attention is considered attention declaration. By means of this declaration, even devices that cannot monitor the user can learn whether the user is available for communication or not (Shell & Selker, 2003). The Attentive Cell Phone application (Section 5.5), with its eyeReason server, is an example of declaring attention to other devices (Vertegaal et al., 2002).
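As a sketch only (the eyeReason protocol is not described in this chapter, so the endpoint and message format below are entirely invented), declaring attention can be as simple as publishing the current state to a shared server that other devices consult before interrupting.

```python
import json
import urllib.request

def declare_attention(user_id: str, in_conversation: bool,
                      server_url: str = "http://attention-server.local/state") -> None:
    """Post a tiny attention record so that other devices can mediate their
    interruptions. The URL and schema are hypothetical."""
    record = json.dumps({"user": user_id, "in_conversation": in_conversation}).encode()
    request = urllib.request.Request(server_url, data=record,
                                     headers={"Content-Type": "application/json"})
    urllib.request.urlopen(request)  # fire-and-forget for this sketch

# declare_attention("alice", in_conversation=True)  # requires a real server to run
```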


5. Attentive Computing Systems

5.1. Eye Contact Sensors: Eye-R & eyeContact

Eye-R (Selker et al., 2001) and eyeContact (Vertegaal et al., 2002) are similar low-cost, calibration-free, wearable eye contact detection sensors. They are mainly intended to gather and deliver information about a person's visual attention.

Eye-R does this by sensing eye fixations. It uses infrared beams to detect eye movements; when the eyes fixate on an object, this is usually considered a good indication of intentional interest. The system also detects when a user orients his or her head toward another person, device or appliance that wears Eye-R.

eyeContact also uses infrared beams, but in a different manner, to detect the visual attention of users: it detects the user's pupils in its field of view. eyeContact is based on IBM's PupilCam. One set of infrared LEDs on the camera produces a bright pupil reflection, and another set produces dark pupils in the eyes within range. By syncing the LEDs with the camera's clock, alternating bright-pupil and dark-pupil effects are produced in successive video frames. Through a simple computer vision algorithm, pupils are detected by subtracting odd and even frames. The eyeContact sensor is said to have a range of 2 meters, and by its nature it can determine whether a person is looking at, or maintaining eye contact with, the sensor.
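A minimal sketch of the frame-differencing step described above (not the actual eyeContact code; the frame size and threshold are invented) could look like this, using NumPy on grayscale frames:

```python
import numpy as np

def candidate_pupils(bright_frame: np.ndarray, dark_frame: np.ndarray,
                     threshold: int = 60) -> np.ndarray:
    """Subtract the dark-pupil (odd) frame from the bright-pupil (even) frame;
    pupils stand out strongly in the difference while the rest of the scene
    cancels out. Returns a boolean mask of candidate pupil pixels."""
    diff = bright_frame.astype(np.int16) - dark_frame.astype(np.int16)
    return diff > threshold

# Toy 8x8 frames: one "pupil" at (3, 4) that is bright only under on-axis illumination.
bright = np.full((8, 8), 40, dtype=np.uint8); bright[3, 4] = 200
dark = np.full((8, 8), 40, dtype=np.uint8)
mask = candidate_pupils(bright, dark)
print(np.argwhere(mask))   # [[3 4]]
```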

Both sensors are also able to deliver the obtained information about their user's visual attention to a server over a wireless TCP/IP connection.

5.2. Gaze-Orchestrated Windows

This application, realized by Richard Bolt in the early 1980s, is broadly accepted as the first serious Attentive User Interface (AUI) application (Bolt, 1981). The system is a good example of augmenting user attention by attenuating unnecessary details. In this application, a user looks at a wide screen on which 40 movie episodes play at the same time. Separate soundtracks from every episode are played simultaneously to create a cocktail-party effect. The system determines where the user is looking on the screen by means of a pair of eye-tracking glasses. When it detects which episode the user is looking at, the soundtrack of that episode is turned up and the others are turned down. Soon after, if the user's interest in the episode continues (he or she keeps looking at it), the system makes the episode cover the whole screen.

5.3. GAZE

GAZE is an attentive groupware system that aims to provide more efficient cooperative work by supporting gaze awareness in multiparty mediated communication and collaboration (Vertegaal, 1999). In the experimental study, four people held a group conversation in a 3D virtual conference room, each person represented to the others by an avatar. The difference from an ordinary system is that information about which participant holds the group's visual attention is displayed. The aim is to let the participants hold a more natural group conversation remotely.

5.4. SUITOR

SUITOR is one of the first remarkable Attentive Information Systems (Maglio et al., 2001), developed as an extensible framework for building Attentive Agents (Maglio & Campbell, 2003). The name SUITOR is a contraction of "Simple User Interest Tracker". The major objective of SUITOR is to inform its user, without disturbing him or her, according to the ongoing tasks, priorities and goals detected by the system. This detection is made by means of eye-gaze tracking, application use, web browsing, email content, and keyboard and mouse input. The application is therefore notable for monitoring the user through many different channels. After SUITOR has gathered the necessary information about the user's interests, it looks for relevant information and presents it to the user in an unobtrusive manner using a ticker-tape display at the periphery of the screen. There are similar attentive information and notification systems in the literature, such as Scope (Dantzich et al., 2002), Fred (Vertegaal et al., 2000) and the Attention-Aware Peripheral Display (Park et al., 2009).

5.5. Attentive Cell Phones

Attentive Cell Phones are phones that can determine, through an eye contact sensor and voice analysis, whether their user is in a face-to-face conversation (Vertegaal et al., 2002). This study is a well-organized example of the declaration of attention. The attentive phone sends the attentional information to an eyeReason server (Vertegaal et al., 2002). The server stores this information for all the people connected to the system. Thus, all users can learn whether a person is in a face-to-face conversation or not. By using this information, in addition to the user's own phone, other people gain the ability to regulate their communication more effectively.

5.6. MAGIC

MAGIC stands for Manual And Gaze Input Cascaded pointing (Zhai et al., 1999). It is an attentive pointing system in which the mouse pointer is automatically warped to the point the user is looking at on the screen. The aim is to save the user from the mouse-eye coordination problem and to enable faster selection. Experimental results showed that this method is faster than the classical mouse approach. However, the Midas Touch problem, that is, inadvertent actions, presents a challenge for these systems.

MAGIC eases the use of graphical user interfaces by sensing visual attention and thus augments and supports the user's attention. A similar application in the literature is eyePoint (Kumar et al., 2007), with comparable capabilities.
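The sketch below captures the basic cascade idea in a few lines (the threshold and coordinates are invented; this is not the published MAGIC algorithm): the pointer is warped only when the gaze lands far from it, and fine positioning is then left to the mouse.

```python
import math

def cascade_pointer(pointer_xy: tuple[float, float],
                    gaze_xy: tuple[float, float],
                    warp_threshold: float = 120.0) -> tuple[float, float]:
    """Warp the pointer to the gaze position when the user looks at a distant
    target; small gaze movements near the pointer are ignored so that the
    mouse keeps doing the precise work."""
    distance = math.dist(pointer_xy, gaze_xy)
    return gaze_xy if distance > warp_threshold else pointer_xy

print(cascade_pointer((100, 100), (900, 500)))  # warped: (900, 500)
print(cascade_pointer((100, 100), (130, 110)))  # unchanged: (100, 100)
```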

5.7. Pong

Pong is an attentive toy, or robot, that pays attention to people so that it can attend to their needs using visual and audio sensors (Haritaoglu et al., 2001). It is also considered an Attentive Agent by Maglio & Campbell (2003). Pong is able to monitor user actions, react accordingly and convey attention and emotion. It can detect and track multiple people in a scene by means of real-time video and audio processing techniques and speech recognition, and it can maintain eye contact with people. Pong can express emotions such as happiness, sadness, surprise and confusion through facial expressions.

Pong does this by means of its moving head, ping-pong-ball eyes and artificial lips. Thus, Pong can develop natural communication with people in the way humans do. This means, precisely, pulling machines out of their isolated, autonomous worlds and enabling machines and humans to work together more as partners (Haritaoglu et al., 2001).

5.8. Gaze-Contingent Display

A Gaze-Contingent Display (Reingold et al., 2003) is an Attentive Display (Baudish et al., 2003) that dynamically adjusts its rendering according to the user's focus of attention. While these displays provide high resolution in the area of the screen at which the user is looking, they provide lower resolution elsewhere. The size of the area shown in high resolution is determined by eye-gaze tracking and the user's perceptual span. Thus, both the user's and the computer's processing capacities are concentrated on the most important information, that is, the information the user is interested in. Users are offered high-resolution information on the screen while peripheral details are filtered out, and the computer does not waste its processing power on peripheral information. These displays are used in many fields such as simulators, virtual reality, remote piloting and telemedicine (Baudish et al., 2003). There are other similar attentive display implementations in the literature, such as Focus Plus Context Screens, real-time 3D graphics and easily perceived displays (Baudish et al., 2003).
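A toy sketch of gaze-contingent degradation is given below; the foveal radius and downsampling factor are arbitrary choices for the example, whereas real systems match the high-resolution window to the measured perceptual span.

```python
import numpy as np

def gaze_contingent(image: np.ndarray, gaze_rc: tuple[int, int],
                    radius: int = 32, factor: int = 8) -> np.ndarray:
    """Keep full resolution within `radius` pixels of the gaze point and
    replace the periphery with a block-averaged (low-resolution) version."""
    h, w = image.shape
    assert h % factor == 0 and w % factor == 0, "toy version: size must divide evenly"
    # Low-resolution background: average over factor x factor blocks, then upsample.
    low = image.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))
    low = np.kron(low, np.ones((factor, factor)))
    rows, cols = np.ogrid[:h, :w]
    fovea = (rows - gaze_rc[0]) ** 2 + (cols - gaze_rc[1]) ** 2 <= radius ** 2
    return np.where(fovea, image, low)

frame = np.random.randint(0, 256, (256, 256)).astype(float)
out = gaze_contingent(frame, gaze_rc=(128, 128))
print(out.shape)   # (256, 256): sharp near the gaze point, averaged elsewhere
```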


6. Conclusion

In this chapter, interaction problems that grow in parallel with the increase in the number and variety of digital computing systems have been discussed in detail. In today's ubiquitous computing age, users are in a multiparty human-computer interaction with computing devices. This renders the existing interaction channels and methods insufficient.

Users face many difficulties because of the modern lifestyle. The information that people need to process exceeds human mental or cognitive capacities most of the time. They are expected to be available and open to connection 24/7. Furthermore, current interaction design approaches, and consequently the computing systems designed with them, aggravate these problems. Such systems have no individual or contextual sensitivity to the user. They pass any incoming information on to their users without filtering or mediating unnecessary details according to the user's needs, context and goals. They bombard their users with interruptions. One has to understand that every interruption is a cost to the user: the work flow is broken, and the user faces a performance loss, frustration and the need to re-evaluate what has been going on for the past few seconds or more.

As a response to these problems, Attentive Computing (AC) proposes several solutions. AC promises systems that are sensitive to their users' needs and goals. It proposes a computing system that gracefully negotiates the volume and timing of interruptions and messages instead of imposing them. It also proposes a system that helps and supports its users in their work by attenuating unnecessary and irrelevant information.

Strictly speaking, the well-known execution-evaluation cycle no longer works properly because of the ubiquitous pattern of interruption. Every interruption prevents the user both from observing the output of the computing system and from articulating input into the system. In case of an interruption, users have to start a new cycle in order to respond to it. Then, after finishing the cycle caused by the interruption, they have to return to the point at which they left the previous cycle. We have proposed an attentive computing interaction framework as a solution to this issue. While the user is in a highly attentive state, observing or articulating, an attentive system may delay a new interruption, present it through a peripheral channel, or even highlight the current task.

The AC paradigm is a relatively new subject in the Human-Computer Interaction (HCI) field. Over the last 20 years, research on the subject has been carried out by a limited number of researchers. The problems of AC are still open problems to address, and there are only a few valuable studies focusing on the classification, usability and frameworks of Attentive Computing Systems (ACSs). Devising new user tracking methods and advancing the existing ones are also promising fields.

Maintaining the protection of privacy and users' confidence in ACSs is another significant research problem. There are few studies on user privacy within AC. For this purpose, it is proposed to benefit from the studies done within other computing paradigms. Studies and findings show that AC is important and has attractive properties for the future of invisible computing.

ACSs have the potential to ease computer usage. They save digital computing devices from their disruptive behaviors and enable them to behave more socially. They support users in many different ways and allow them to focus on their tasks instead of on the interface itself. We therefore believe that the invisibility of any interaction artifact depends on the way it is presented.

References

1. Bailey, B. P., Konstan, J. A. & Carlis, J. V. (2000). Measuring the effects of interruptions on task performance in the user interface, IEEE International Conference on Systems, Man, and Cybernetics, Vol. 2, pp. 757-762, 2000.
2. Bolt, R. A. (1980). Put-That-There: Voice and gesture at the graphics interface, ACM Computer Graphics, Vol. 14, No. 3, pp. 262-270.
3. Bolt, R. A. (1981). Gaze-Orchestrated Dynamic Windows, Computer Graphics, Vol. 15, No. 3, pp. 109-119.
4. Borriello, G. (2000). The Challenges to Invisible Computing, Integrated Engineering, November 2000.
5. Baudish, P., DeCarlo, D., Duchowski, A. T. & Geiser, W. S. (2003). Focusing on the Essential: Considering Attention in Display Design, Communications of the ACM, Vol. 46, No. 3, pp. 60-66, 2003.
6. Buxton, W. (2001). Less is More (More or Less), In: P. Denning (Ed.), The Invisible Future: The Seamless Integration of Technology in Everyday Life, McGraw Hill, New York, pp. 145-179.
7. Chen, D. & Vertegaal, R. (2004). Using Mental Load for Managing Interruptions in Physiologically Attentive User Interfaces, Proceedings of ACM CHI '04, pp. 1513-1516, 2004.
8. Dantzich, M., Robbins, D., Horvitz, E. & Czerwinski, M. (2002). Scope: Providing awareness of multiple notifications at a glance, Proceedings of Advanced Visual Interfaces, 2002.
9. Dirican, A. C. & Göktürk, M. (2008). Dikkatli Arayüzler (Attentive Interfaces), ASYU: Akıllı Sistemlerde Yenilikler ve Uygulamaları Sempozyumu, June 2008.
10. Dix, A., Finlay, J., Abowd, G. D. & Beale, R. (2004). Human Computer Interaction, 3rd Edition, Prentice Hall, 2004.
11. Fogarty, J. (2004). Sensor-Based Statistical Models of Human Interruptibility, IBM Research Human-Computer Interaction Symposium, 2004.
12. Fono, D. & Vertegaal, R. (2005). EyeWindows: Evaluation of eye-controlled zooming windows for focus selection, Proceedings of CHI 2005, 2005.
13. Gibbs, W. (2005). Considerate computing, Scientific American, January 2005, pp. 54-61.
14. Haritaoglu, I., Cozzi, A., Koons, D., Flickner, M., Zotkin, D., Duraiswami, R. & Yacoob, Y. (2001). Attentive Toys, IEEE International Conference on Multimedia and Expo, pp. 917-920, August 2001.
15. Horvitz, E., Jacobs, A. & Hovel, D. (1999). Attention-sensitive alerting, Proceedings of UAI '99, pp. 305-313, Stockholm, 1999.
16. Horvitz, E., Kadie, C., Paek, T. & Hovel, D. (2003). Models of attention in computing and communication: from principles to applications, Communications of the ACM, Vol. 46, No. 3, pp. 52-59, 2003.
17. Jacob, R. J. K. (1990). What You Look At is What You Get: Eye Movement-Based Interaction Techniques, Proceedings of ACM CHI '90, pp. 11-18, Seattle, Washington, USA, 1990.
18. Kumar, M., Paepcke, A. & Winograd, T. (2007). eyePoint: Practical Pointing and Selection Using Gaze and Keyboard, Proceedings of CHI 2007, San Jose, California, April 2007.
19. Maglio, P., Matlock, T., Campbell, C. S., Zhai, S. & Smith, B. (2000). Gaze and Speech in Attentive User Interfaces, Lecture Notes in Computer Science, Vol. 1948, pp. 1-7, 2000.
20. Maglio, P. P., Campbell, C. S., Barrett, R. & Selker, T. (2001). An architecture for developing attentive information systems, Knowledge-Based Systems, Vol. 14, pp. 103-110, 2001.
21. Maglio, P. P. & Campbell, C. S. (2003). Attentive Agents, Communications of the ACM, Vol. 46, No. 3, pp. 47-51, 2003.
22. McCrickard, D. S. & Chewar, C. M. (2003). Attuning notification design to user goals and attention costs, Communications of the ACM, Vol. 46, No. 3, pp. 67-72, 2003.
23. McFarlane, D. C. (1999). Coordinating the Interruption of People in Human-Computer Interaction, INTERACT '99, pp. 295-303, 1999.
24. Nielsen, J. (1993). Noncommand User Interfaces, Communications of the ACM, Vol. 36, No. 4, pp. 82-99, 1993.
25. Park, S., Park, S. S., Lim, Y., Lee, G. & Hahn, M. (2009). Designing Attention-Aware Peripheral Displays with Gaze-Based Notification Control, CHI 2009, Boston, MA, USA, April 4-9, 2009.
26. Perdikis, S., Tzovaras, D. & Strintzis, G. M. (2008). Recognition of Human Activities Using Layered Hidden Markov Models, 2008 IAPR Workshop on Cognitive Information Processing, Santorini, Greece, June 9-10, 2008.
27. Pickard, R. W. (1998). Affective Computing, MIT Press.
28. Reingold, E. M., Loschky, L. C., McConkie, G. W. & Stampe, D. M. (2003). Gaze-Contingent Multiresolutional Displays: An Integrative Review, Human Factors: The Journal of the Human Factors and Ergonomics Society, Vol. 45, No. 2, pp. 307-328, Summer 2003.
29. Roda, C. & Thomas, J. (2005). Attention Aware Systems, Encyclopaedia of HCI, IDEA Group.
30. Roda, C. & Thomas, J. (2006). Attention Aware Systems: Theories, Applications, and Research Agenda, Computers in Human Behavior, Vol. 22, No. 4, pp. 557-587, 2006.
31. Rowe, D. W., Sibert, J. & Irwin, D. (1998). Heart rate variability: indicator of user state as an aid to human-computer interaction, Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 480-487, Los Angeles, California, USA, April 18-23, 1998.
32. Selker, T. & Burlesson, W. (2000). Context-aware design and interaction in computer systems, IBM Systems Journal, Vol. 39, No. 3-4, pp. 880-891, July 2000.
33. Selker, T., Lockerd, A., Martinez, J. & Burleson, W. (2001). Eye-R, a Glasses-Mounted Eye Motion Detection Interface, Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems, 2001.
34. Selker, T. (2004). Visual Attentive Interfaces, BT Technology Journal, Vol. 22, No. 4, 2004.
35. Shell, J. S. (2002). Taking Control of the Panopticon: Privacy Considerations in the Design of Attentive User Interfaces, CSCW 2002 Conference on Computer Supported Collaborative Work, New Orleans, 2002.
36. Shell, J. S., Vertegaal, R. & Skaburskis, A. (2003a). EyePliances: Attention-Seeking Devices that Respond to Visual Attention, Extended Abstracts of the ACM CHI 2003 Conference on Human Factors in Computing Systems, 2003.
37. Shell, J. S., Selker, T. & Vertegaal, R. (2003b). Interacting with Groups of Computers, Communications of the ACM, Vol. 46, No. 3, pp. 40-46, 2003.
38. Shneiderman, B. (1998). Designing the User Interface, 3rd Edition, Addison Wesley.
39. Vertegaal, R. (1999). The GAZE groupware system: Mediating joint attention in multiparty communication and collaboration, Proceedings of CHI '99, Pittsburgh, 1999.
40. Vertegaal, R., Slagter, R., Van der Veer, G. C. & Nijholt, A. (2000). Why Conversational Agents Should Catch the Eye, Extended Abstracts of CHI 2000, pp. 257-258, The Hague, Netherlands, 2000.
41. Vertegaal, R. (2002). Designing Attentive Interfaces, Proceedings of the ACM ETRA Symposium on Eye Tracking Research and Applications, ACM Press, New Orleans, 2002.
42. Vertegaal, R., Dickie, C., Sohn, C. & Flickner, M. (2002). Designing attentive cell phones using wearable eyecontact sensors, Extended Abstracts of CHI '02, pp. 646-647, 2002.
43. Vertegaal, R. (2003). Introduction, Communications of the ACM, Vol. 46, No. 3, pp. 30-33, 2003.
44. Vertegaal, R., Shell, J. S., Chen, D. & Mamuji, A. (2006). Designing for augmented attention: Towards a framework for attentive user interfaces, Computers in Human Behavior, Vol. 22, pp. 771-789, 2006.
45. Yamaoka, F., Kanda, T., Ishiguro, H. & Hagita, N. (2009). Developing a model of robot behavior to identify and appropriately respond to implicit attention-shifting, HRI 2009, pp. 133-140, 2009.
46. Weiser, M. (1991). The computer for the 21st century, Scientific American, pp. 94-100, September 1991.
47. Zhai, S. (2003). What's in the eyes for attentive input, Communications of the ACM, Vol. 46, No. 3, pp. 34-39, 2003.
48. Zhai, S., Morimoto, C. & Ihde, S. (1999). Manual and gaze input cascaded (MAGIC) pointing, Proceedings of CHI '99, pp. 246-253, 1999.
