Open access peer-reviewed chapter

Introduction to Intelligent User Interfaces (IUIs)

Written By

Nauman Jalil

Submitted: 18 January 2021 Reviewed: 19 April 2021 Published: 15 June 2021

DOI: 10.5772/intechopen.97789

From the Edited Volume

Software Usability

Edited by Laura M. Castro, David Cabrero and Rüdiger Heimgärtner


Abstract

This chapter is intended to provide an overview of the subject of Intelligent User Interfaces. The outline includes the basic concepts and terminology, a review of current technologies and recent developments in the field, common architectures used for the design of IUI systems, and finally IUI applications. Intelligent user interfaces (IUIs) attempt to address human-computer interaction issues by offering innovative communication approaches and by listening to the user. Virtual reality is also an emerging IUI area that could become the dominant interface of the future by integrating technology into the environment so that it is at the same time more tangible and more invisible. The ultimate computer interface is closer to holding a dialog with the computer, an interactive virtual-reality environment in which you can communicate. This chapter also explores a methodology for the design of situation-aware user interface frameworks that use user and context inputs to provide information tailored to the user's activities in particular circumstances. In order to adapt to a new situation, the user interface reconfigures itself automatically. Adjusting the user interface to the actual situation and providing a reusable list of tasks in a given situation decreases the operator's memory load. The challenge of pulling together the details needed by situation-aware decision support systems in a way that minimizes cognitive workload is not addressed by current user interface design.

Keywords

  • Intelligent User Interfaces (IUIs)
  • Human-Computer Interaction (HCI)
  • Graphical User Interfaces (GUI)
  • Ubiquitous Computing (Ubicomp)
  • User-Centered Design (UCD)

1. Introduction

The methods by which people interact with computers have made great strides. The journey continues: new system designs and technologies emerge every day, and in the last few decades research in this field has expanded rapidly. Artificial intelligence tries to simulate human abilities with the help of machines. This challenge, launched at the famous Dartmouth conference in 1956, has attracted a great deal of interest and effort from the research community over the last 50 years. While the community has not been able to fulfil the euphoric expectations presented at that conference even after so many years, a good number of accomplishments have been achieved in this simulation of human capabilities with computers, driving the progress of the artificial intelligence field.

One of the cornerstones of simulating human abilities, without any doubt, is communication. In this scenario, communication may take place between humans, between machines, or between humans and machines. Communication between humans has been studied for many years in communication theory; communication between machines is still a major topic in conferences, with a particular focus on ontologies and communication acts. Finally, communication between humans and machines is studied in the Human-Computer Interaction discipline, which deals with the design, implementation and assessment of user interfaces.

From the first command-based user interfaces to the most advanced graphical ones, the brief, but extremely active, history of computer science has witnessed a real revolution in user interfaces. The computational balance is now leaning towards interaction, cultivating an increasing interest in user interfaces. At the same time, the growing power and sophistication of user interfaces is encouraging the creation of techniques and modalities of interaction closer to human beings’ cognitive models. In the pursuit of these goals, several different artificial intelligence methods have been incorporated into user interfaces over the last fifty years to provide a more natural and efficient interaction between humans and computers.

One of the key parts of an application is the user interface. If it is designed correctly, the user can feel comfortable when communicating with the computer. On the other hand, even if an application is capable of performing the tasks for which it was intended, it will not be acceptable to users unless it can interact with them in an intelligible and functional way. There is a real interest in enhancing communication between the user and the machine within the Human-Computer Interaction community, reflected in the large number of researchers dedicated to the study of techniques intended to improve usability [1] in a user interface.

In the quest to build user interfaces with a high degree of usability, the Human-Computer Interaction research community studies how to improve the user's experience of the user interface. One main problem to be tackled in this pursuit is making the interaction more natural. This growing interest in making interaction more natural means raising the standard of communication between humans and machines. The fundamental concepts of natural interaction have led to the exploration and implementation of a large number of techniques in interaction design coming from Artificial Intelligence, enabling, among many other things, more detailed responses to the actions of users. The term Intelligent User Interfaces was coined for this integration of Artificial Intelligence techniques within Human-Computer Interaction, and it is in this area that the present work is situated.

A framework for the design of situation-aware interfaces is also covered in this chapter, in which input information (context and environmental cues) is explicitly taken into account in the task specification [2]. The designer adds abstract UI components to the task model in order to derive a concrete user interface (UI). This information is platform-independent, so the rendering back-end can ultimately use it to build a specific UI for different platforms. The next phase involves developing the dialogue model. To simplify the designers' work, the states of the individual dialogues and the transitions between them can be created automatically; the method provides an algorithm to derive the dialogues and the transitions between them from the task specification. Designers can modify, add or remove these transitions according to the outcome of a previous testing stage or their own expertise. This allows developers of situation-aware UIs to exploit transitions caused by changes in the situation, and therefore gives them control over the effect of the situation on the UIs' usability. The research in [3] focuses on developing visual context-aware services by integrating approaches from computer vision and artificial intelligence; it has close connections to the field of intelligent user interfaces.

The touchscreen keyboard is the most prevalent intelligent user interface on modern cell phones, and it is vital for mobile communication. Working to develop smarter, more effective, easier-to-learn, and more enjoyable keyboards has raised a slew of intriguing IUI interface and research questions [4]. Progress and open research questions in text input over the last decade are discussed in [12, 13, 14, 15, 16, 17]: they include cost-benefit analyses of text entry suggestions [5], the significance of human performance models in the creation of error-correction algorithms and the potential of machine/statistical intelligence [6, 7, 8, 9, 10, 11], the ramifications of spatial scaling from a phone to a watch for the division of labour between human and machine, user behaviour and learning, and the complexity of assessing the longitudinal impact of personalization and adaptation. The aim of this study is to show that intelligent user interfaces, or the integration of artificial intelligence and human factors, are the future of human-computer interaction and information technology in general.

By varying decision factors’ forms, numbers, and values, this article [18] provides a mechanism for adaptive, measurable decision making for Multiple Attribute Decision Making (MADM). This research can be used to help designers create intelligent user interfaces for HCI decision-making applications that respond to user experience and decision-making efficiency. In [19], a Genetic Programming-based technology is proposed for automating crucial design phases. Designers can specify simple content elements and ways to merge them in this method, which will then be automatically composed and checked with actual users by a genetic algorithm to find optimal compositions.

2. Human computer interaction

The idea of Human-Computer Interaction/Interfacing (HCI), also referred to as Man-Machine Interaction or Interfacing, emerged naturally with the advent of the computer, or more generally of the machine itself. In fact, the explanation is straightforward: most advanced machines are useless unless people can use them properly. The main ideas that should be considered in the development of HCI are clearly present in this simple argument: functionality and usability [20].

Ultimately, why a system is actually built is determined by what the system can really do, i.e. how a system's functions can assist in achieving the system's purpose. A system's functionality is characterized by the collection of actions or services it offers its users. However, the value of functionality only becomes visible when it can be used effectively by the user [21]. The usability of a system with some functionality is the extent and degree to which the system can be used easily and reasonably by certain users to achieve specific objectives. The real productivity of a system is achieved when there is an appropriate balance between its functionality and its usability [22].

Taking these definitions into account, and knowing that in this context the terms system, machine and computer are often used interchangeably, HCI is a design discipline that should create a match between the user, the machine and the required services in order to achieve a certain performance both in terms of quality of service and optimality [23]. Deciding what makes a certain HCI design successful is largely subjective and context dependent. An aircraft model design tool, for instance, must display and lay out components with precision, whereas graphics editing software does not really require such accuracy. The technologies available may also influence how different types of HCI are configured for the same purpose. For example, commands, menus, graphical user interfaces (GUIs) or virtual reality can be used to access the information on a given device. A more detailed description of current techniques and technologies used to communicate with computers, and of the latest developments in the field, is provided in the next section.

3. Intelligent user interfaces

Intelligent user interfaces are human-machine interfaces whose purpose is to enhance the efficiency, effectiveness, naturalness and, in general, usability of interactions between humans and machines by representing, reasoning and acting on a collection of models (user, domain, dialog, voice, functions, etc.). The design of such user interfaces is a multidisciplinary challenge because these models come from different disciplines (see Figure 1). Artificial Intelligence contributes intelligence modeling methods; Software Engineering contributes coherent systems, notations and formal languages; and the concern for the user leads to Human-Computer Interaction, with its strategies for designing practical user interfaces.

Figure 1.

Various disciplines in the conception of intelligent user interfaces [24].

In order to fulfill the key purpose of intelligent user interfaces and to help users in various situations, such interfaces need to be able to represent the information they have about users, the activities users are allowed to perform through the user interface, and the context of use in which the user communicates with the application, and they must be able to properly interpret the inputs and produce the outputs based on all the data gathered and the knowledge they have [25]. The context of use is generally defined by providing a model of the application users' strengths, skills and interests, of the platform on which those users communicate with the application (both the hardware and the software platform), and of the physical environment in which the interaction takes place, such as luminosity, noise level, etc. In [26], the task actually being performed by the user is also included as a first-order element in the context of use. Since the current task is included in the context of use, the reactions the machine should give in a particular context can be better determined. This can, for example, be used to build context-sensitive help systems quickly. Including the task in the context of use is very helpful; it is not always feasible, though, to establish with a high level of assurance what the user is actually carrying out. Based on the interaction data obtained from the user, the authors add certain heuristics to a task model to figure out which task is actually being performed. The heuristics work on the premise that the possible tasks that the user executes at a given time are just the set of tasks "enabled" in the presentation with which the user is interacting at that moment [27].
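
As a minimal illustration of such a heuristic (not taken from [26, 27]; all task and widget names are hypothetical), the following Python sketch restricts the candidate tasks to those enabled in the current presentation and scores them against the most recent interaction events:

from collections import Counter
from dataclasses import dataclass
from typing import List, Set

@dataclass
class Task:
    name: str
    widgets: Set[str]          # widgets through which this task is carried out
    enabled: bool = True       # is the task enabled in the current presentation?

def guess_current_task(tasks: List[Task], interaction_log: List[str]) -> Task:
    """Heuristic: among the enabled tasks, pick the one whose widgets appear most in recent events."""
    candidates = [t for t in tasks if t.enabled]
    recent = Counter(interaction_log[-10:])                  # only look at the last few events
    return max(candidates, key=lambda t: sum(recent[w] for w in t.widgets))

if __name__ == "__main__":
    tasks = [
        Task("edit_document", {"text_area", "bold_button"}),
        Task("search", {"search_box", "search_button"}),
        Task("print", {"print_dialog"}, enabled=False),      # not reachable in the current presentation
    ]
    log = ["search_box", "text_area", "search_box", "search_button"]
    print(guess_current_task(tasks, log).name)               # -> search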

3.1 Intelligent user interfaces goals

Figure 2 illustrates some of the most critical issues encountered in intelligent user interfaces [28]. These challenges are all aimed at the ultimate goal of intelligent user interfaces: improving the overall usability of the system.

Figure 2.

The most common objectives seen in intelligent user interfaces [24].

As Figure 2 shows, certain human capacities are included, such as understanding, adaptation, reasoning or modeling of the world. Many of these capacities have been lines of study since the very beginning of the field of Artificial Intelligence. Many academic accounts of artificial intelligence claim that intelligence is a program that can be executed regardless of the platform on which it runs, whether that platform is a machine or a brain. Two models, the user's conceptual model and the execution model of the machine, must reach a kind of equilibrium and mutual understanding when the user communicates with an artifact. The user employs a concrete vocabulary of operation defined by the input devices and by the metaphor for which the user interface has been developed. On the other side, the artifact must interpret the input and respond to the interaction by adapting the contents and displaying them, and it should also be able to evaluate the interaction itself and draw conclusions about how useful it is.

There are several areas of Artificial Intelligence that bring approaches, strategies and ideas of vital importance to the development of intelligent user interfaces: methods for providing learning capacities such as neural networks or Bayesian networks; knowledge representation methods such as semantic networks or frames; decision models such as fuzzy logic, expert systems, case-based reasoning and decision trees; and so on. By applying these techniques to the intelligent user interface we can facilitate adaptation, make its use simpler, analyze the interaction, simulate activities, guide the user or assist the developer of the user interface, as sketched below.
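
For instance, the adaptation decision itself can be as simple as a hand-written, decision-tree-style rule set over a couple of interaction measures. The Python sketch below is purely illustrative; the thresholds and adaptation names are assumptions, not part of any cited system:

def choose_adaptation(error_rate: float, help_requests: int) -> str:
    """Tiny decision-tree-style rule set that picks an interface adaptation."""
    if error_rate > 0.3:
        # Many slips: simplify the interface and reduce the number of visible options.
        return "simplify_layout"
    if help_requests > 2:
        # The user seems lost: offer guided, step-by-step interaction.
        return "enable_wizard_mode"
    # Experienced behaviour: expose shortcuts and advanced options.
    return "show_expert_shortcuts"

if __name__ == "__main__":
    print(choose_adaptation(error_rate=0.4, help_requests=0))   # -> simplify_layout
    print(choose_adaptation(error_rate=0.1, help_requests=5))   # -> enable_wizard_mode
    print(choose_adaptation(error_rate=0.05, help_requests=0))  # -> show_expert_shortcuts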

4. Ubiquitous computing and ambient intelligence

The most recent area of HCI research is unquestionably ubiquitous computing (Ubicomp). The phrase, frequently used interchangeably with ambient intelligence and pervasive computing, refers to the ultimate human-computer interaction approach: the elimination of the desktop and the embedding of the computer in the environment, so that it becomes invisible to humans while surrounding them everywhere. Mark Weiser first proposed the concept of ubiquitous computing during his time as chief technologist at Xerox PARC's Computer Science Lab in 1988. His vision was to embed computers into the environment and into everyday objects everywhere, so that people could interact with many computers at the same time while the computers remain invisible to them and communicate with each other wirelessly [29].

Ubicomp has also been called the third era of computing. The first era was the mainframe era, with one machine shared by many people. Then came the second era, the PC era, with one computer per user, and now Ubicomp introduces the era of many machines per person [29]. The major trends in computing are shown in Figure 3.

Figure 3.

Major trends in computing [29].

5. Intelligent user interface development

For some years the predominant trend in software engineering has been model-based development, and user interface development (MB-UID) is not an exception [30, 31]. The key concept behind this approach is to define, in a declarative way, a set of models representing the characteristics of the user, the tasks, the domain, the context of use, and the user interface at various levels of abstraction. A model is a generalized representation of a portion of the world called the system. All of these models are typically stored in an XML-based user interface description language; XML has become a de facto lingua franca for intelligent user interfaces [32]. Following this strategy, intelligent user interfaces [33] and hypermedia systems [34] are being developed today.

Knowledge bases store the information the system has about both the running application and the context of use. The information concerning the application is usually collected during the various stages of design. In our case, the meta-model used to store this information is a slightly updated version of the usiXML XML-based user interface description language, although other description languages such as XIML or UIML could be used as well. The user interface information involves the following models (a simplified code sketch of these models follows the list below):

  • The domain model holds the objects/data the user interface requires for the user to carry out his or her activities.

  • The task model that represents the activities to be done by the user using the user interface and the temporal limits between those activities. A notation based on ConcurTaskTrees is used to model it [35].

  • The abstract user interface model comprises the user interface represented in terms of abstract interaction artifacts [36], and irrespective of the platform (it does not focus on the functionality of various types of systems, such as PCs, PDAs or mobile phones) and regardless of the modality (graphical or vocal). This model is very helpful for connecting the spaces of interaction for which widgets are eventually positioned with the objects/data used to execute the tasks in those spaces of interaction [37].

  • The concrete user interface model depicts the user interface, but in this scenario it is comprised of actual objects of interaction [36]. In this context, the representation of the user interface is based on the platform, and it will be the primary model used to create a complete user interface with which the user will eventually communicate.

  • The context model keeps all the details that the system gathers or handles about the context of use. It contains a user model, a platform model, and an environment model.
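
To make the relationship between these models more concrete, the following sketch expresses a tiny fragment of a task model, an abstract UI model and a context model as plain Python data structures. It is only a schematic stand-in: real usiXML, XIML or UIML documents are XML-based and considerably richer, and all names below are invented for illustration.

from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class TaskNode:
    name: str
    temporal_operator: str = ""                      # e.g. "enabling" between sibling tasks
    children: List["TaskNode"] = field(default_factory=list)

@dataclass
class AbstractInteractor:
    kind: str                                        # "input", "output", "navigation", ... (platform/modality independent)
    bound_to: str                                    # the domain object/attribute it manipulates
    task: str                                        # the task it supports

@dataclass
class ContextModel:
    user: Dict[str, str]
    platform: Dict[str, str]
    environment: Dict[str, str]

# A fragment of a task model: "log_in" enables "browse_catalogue".
task_model = TaskNode("root", children=[
    TaskNode("log_in", temporal_operator="enabling"),
    TaskNode("browse_catalogue"),
])

# Abstract UI model: platform-independent interactors bound to domain data and tasks.
abstract_ui = [
    AbstractInteractor("input",  bound_to="user.name",       task="log_in"),
    AbstractInteractor("input",  bound_to="user.password",   task="log_in"),
    AbstractInteractor("output", bound_to="catalogue.items", task="browse_catalogue"),
]

# Context model: user, platform and environment.
context = ContextModel(user={"expertise": "novice"},
                       platform={"screen": "320x240"},
                       environment={"luminosity": "low"})

# A rendering back-end would map each AbstractInteractor to a concrete widget
# (text field, list view, voice prompt, ...) according to the context model.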

The user model contains the user features relevant to the framework (preferences, skills, knowledge). This model is updated by applying user-modeling techniques [38, 39] to the input data obtained from the sensors and to the information already processed. For example, all the activities performed by the user in the user interface are recorded in the interaction log, so that the system can use data mining or classification techniques to derive new user information and update the user model, as sketched below.
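
A minimal sketch of this update step, with invented event types and thresholds, could look as follows; a real system would replace the simple counting with proper data-mining or classification techniques:

from collections import Counter

def update_user_model(user_model: dict, interaction_log: list) -> dict:
    """Derive user-model attributes from the recorded interaction events."""
    counts = Counter(event["type"] for event in interaction_log)
    total = max(len(interaction_log), 1)
    error_ratio = (counts["undo"] + counts["error"]) / total
    user_model["expertise"] = "novice" if (error_ratio > 0.2 or counts["help"] > 3) else "expert"
    user_model["prefers_shortcuts"] = counts["keyboard_shortcut"] > counts["menu_click"]
    return user_model

if __name__ == "__main__":
    log = [{"type": "menu_click"}, {"type": "undo"}, {"type": "help"},
           {"type": "menu_click"}, {"type": "error"}]
    print(update_user_model({}, log))
    # -> {'expertise': 'novice', 'prefers_shortcuts': False}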

The platform model comprises the capabilities (both hardware and software) of the prospective system profiles on which the application can be executed. This model is also updated by analyzing the input data from sensors. For example, if the user changes the screen resolution, the visualization space available for viewing the contents changes, and so the document structure, or even the contents themselves, may need to change. Therefore, when the user modifies the screen resolution, the adjustment is detected by a device sensor so that the platform model can be updated accordingly, as in the sketch below.
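
The following toy sketch illustrates the idea: a hypothetical resolution "sensor" callback updates the platform model and triggers a re-layout. The threshold and layout names are assumptions for illustration only.

# Hypothetical platform model kept as a simple dictionary.
platform_model = {"screen_width": 1280, "screen_height": 800}

def on_resolution_changed(width: int, height: int) -> None:
    """Sensor callback: record the new resolution and adapt the presentation."""
    platform_model["screen_width"] = width
    platform_model["screen_height"] = height
    relayout()

def relayout() -> None:
    # With little horizontal space, collapse the navigation and summarise contents.
    if platform_model["screen_width"] < 600:
        print("compact layout: collapsed navigation, summarised contents")
    else:
        print("full layout: sidebar navigation, full contents")

if __name__ == "__main__":
    on_resolution_changed(1280, 800)   # -> full layout
    on_resolution_changed(480, 320)    # -> compact layout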

The environment model contains the information about the physical environment in which the interaction takes place. The amount of information that could potentially be collected from the surroundings is obviously extremely large, so the developer must determine which information is relevant because it has an effect on the application's use. Once more, the data stored in this model is regularly updated by the incoming sensor data. Good context management means that all the information contained in the knowledge bases of the previously mentioned models is used efficiently and effectively.

Intelligent user interfaces inherit the requirements that any user interface has regarding accessibility and quality of use [40]. Usability describes the extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use (ISO 9241-11). The metaphors already mentioned, the increasing computing capacity and the modern ways of communicating are encouraging the emergence of new methodologies that take into consideration interaction artifacts other than those historically considered [41]. Each of these methodologies requires, or at least draws on, User-Centered Design (UCD). User-Centered Design is a strongly organized, systematic approach to product creation driven by: (1) specifically identified, task-oriented business priorities and (2) awareness of user expectations, constraints and desires.

6. Designing situation-aware decision support systems

This section gives an outline of the design process (Figure 4). The design process encourages the specification of declarative abstract models describing the situation-aware user interface [2].

Figure 4.

Situation-aware Design Process for User Interface.

To export these models to the runtime, the aggregate of the models can be serialized. To evaluate the outcome of these models, the corresponding UI can be generated in the form of a prototype that allows checking how the system responds. Based on the prototype, adjustments may be made to the models in the modeling process, for example to improve the appearance of the UI or the way changes in the situation affect the UI.

Situation-based Task Model: First, a task model is defined that specifies the tasks that users and applications can perform as they interact with the system. Since we want to create situation-aware UIs, tasks depend on the current situation. This is why, in the task model, tasks are drawn up for different circumstances. In this way, the designer can specify different tasks for different situations.

Input Model: Once the task model is defined, the developer needs to identify what kind of input will affect the interaction, i.e. the tasks. This can be achieved by choosing the input collection objects (Perception Objects, or POs). These objects may be aggregated by aggregation objects (AOs) and interpreted by interpretation objects (IOs). The designer does this by binding AOs to POs and choosing how the information needs to be interpreted from a set of predefined interpretation rules. At the comprehension layer, the IOs reflect the interpreted data. Once the input model is defined, the designer has to connect the IOs to task model nodes (inter-model connections). The designer can thus denote which activities can be performed in which situation; a minimal sketch of this pipeline is given below.
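
The flow from perception objects through aggregation to interpretation can be sketched roughly as follows. The classes and the noise-level rule are hypothetical and only illustrate the flow of information, not the actual implementation in [2].

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class PerceptionObject:                 # PO: collects one raw input value
    name: str
    value: float

@dataclass
class AggregationObject:                # AO: combines the values of several POs
    sources: List[PerceptionObject]
    def aggregate(self) -> float:
        return sum(p.value for p in self.sources) / len(self.sources)

@dataclass
class InterpretationObject:             # IO: interprets aggregated data via a predefined rule
    rule: Callable[[float], str]
    def interpret(self, ao: AggregationObject) -> str:
        return self.rule(ao.aggregate())

# Example: two noise "sensors" aggregated and interpreted into a situation label
# that the designer could bind to enabling or disabling speech-based tasks.
noise_rule = lambda level: "noisy" if level > 0.6 else "quiet"
front, rear = PerceptionObject("mic_front", 0.7), PerceptionObject("mic_rear", 0.8)
situation = InterpretationObject(noise_rule).interpret(AggregationObject([front, rear]))
print(situation)                        # -> noisy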

Situation-Specific Dialogue Models: Next, for each situation, the tool can automatically generate a dialogue model from the task model. After that, inter-model links are automatically inserted between the dialogue model states and the task model tasks that are enabled for each individual state. The nodes (states) of the different dialogue models are connected to denote between which states a change of situation can occur. A simplified sketch of this generation step follows.
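
As a simplified illustration of this generation step, the sketch below derives dialogue states and transitions from a trivial linear chain of tasks connected by the "enabling" operator; real task models and the algorithm in [2] are considerably more general.

def dialogue_from_task_chain(tasks):
    """Build dialogue states and transitions for tasks connected by the 'enabling' operator."""
    states = [{task} for task in tasks]                  # one enabled-task set per state
    transitions = [(i, i + 1, tasks[i])                  # completing task i moves to state i+1
                   for i in range(len(tasks) - 1)]
    return states, transitions

if __name__ == "__main__":
    states, transitions = dialogue_from_task_chain(["log_in", "select_item", "confirm"])
    print(states)        # [{'log_in'}, {'select_item'}, {'confirm'}]
    print(transitions)   # [(0, 1, 'log_in'), (1, 2, 'select_item')]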

Presentation Model: Designers need to compose abstract UI components to provide the interface model with information about how the interaction should be presented to the user, and connect these components to the relevant tasks for each node of the presentation model. There are many abstract UI components for the designer to choose from, such as static, data, option, navigation control, hierarchy, and custom widgets. Finally, the presentation model nodes can be arranged hierarchically to organize the presentation components for layout purposes, as in the sketch below.
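
The sketch below shows, in illustrative Python rather than an actual presentation-model notation, how abstract components of the kinds mentioned above might be composed into such a hierarchy and bound to tasks.

from dataclasses import dataclass, field
from typing import List

@dataclass
class AbstractComponent:
    kind: str                            # "static", "data", "option", "navigation", "hierarchy", "custom"
    task: str = ""                       # the task this component supports
    children: List["AbstractComponent"] = field(default_factory=list)

# A presentation node for a hypothetical log-in dialogue.
login_presentation = AbstractComponent("hierarchy", children=[
    AbstractComponent("static",     task="log_in"),             # e.g. an instruction label
    AbstractComponent("data",       task="log_in"),             # e.g. the credential input
    AbstractComponent("navigation", task="browse_catalogue"),   # e.g. the "continue" control
])

def describe(node: AbstractComponent, indent: int = 0) -> None:
    """Print the hierarchical structure of the presentation node."""
    print("  " * indent + f"{node.kind} ({node.task})")
    for child in node.children:
        describe(child, indent + 1)

describe(login_presentation)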

Situation-aware Interface Model: A situation-aware interface model is obtained by aggregating all of the models.

Usability evaluations: In order to assess and enhance the performance of the graphical interface of the models, usability tests are then conducted.

7. Current challenges and envisioning the future

Today, the development of intelligent user interfaces poses challenges comparable to those encountered in Artificial Intelligence. It is well known in the area of human-computer interaction that the ideal user interface is the one that is not noticed at all. Currently, however, an intermediary between the intentions of the user and the realization of those intentions is still necessary. No matter how elegant a user interface is, it is still there, imposing a mental workload on the user.

The computer of the future, according to A. van Dam or T. Furness [42], would be a great butler who knows my setting, my preferences and my character and, without needing explicit orders, anticipates my needs in a subtle way. When the user communicates with this butler, the interaction will mostly rely on gestures, facial expressions [43, 44] and other means of human communication, such as sketching drafts. A goal of Artificial Intelligence from its very beginning, fifty years ago, has been to provide artifacts capable of learning, evolving and communicating with a person as a peer.

For this to actually come true, such agents would have to be able to interpret gestures and emotions by applying computer vision techniques, and to perceive and understand natural language through speech recognition. The inference engine for such agents would rely on Artificial Intelligence and knowledge-based technologies. When these agents speak with the user, they would do so in a friendly way, potentially modulating their voice according to the user's mood at a given point in time.

The complexities of this technology can be separated into three areas: input, inference and output; more precisely, the analysis of human language expressions, the representation and management of knowledge about the world and, ultimately, the perception of human beings as social beings.

In [45], the authors proposed a virtual companion-based interface to simplify the mobile interface for the elderly. This interface shows menus through a virtual agent according to the scenario and the user's request, and animates information in a three-dimensional layout. To collect input from users, the authors used the speech recognition technologies of smartphones and wearable devices. Virtual avatars were chosen to visualize suitable actions based on user feedback. The authors predict that this mobile interface might be the next generation of smartphone user interfaces.

In treatment and rehabilitation, IUIs are often used for tasks such as determining a person's functional state. The performance of functional state estimation depends on combining data from an accelerometer and an EEG [46].

8. Conclusions

Computer products today have the potential to provide us with information, to entertain us or to make our lives easier, but if the user interface provided is limited or difficult to use, they may also slow down our work. This chapter has presented a vision of how various approaches from different fields, including Artificial Intelligence, Software Engineering and Human-Computer Interaction, have been combined over the years to help establish a successful user interaction experience, increasing the overall usability of the system. In [47] the authors explore the community's evolving tacit viewpoint on intelligence characteristics in intelligent user interfaces and provide suggestions for expressing one's own perception of intelligence more clearly.

In the future, interacting with a user interface may feel as natural as listening to another human. We should also note that Human-Computer Interaction's main aim is to reduce the mental distance between users and their activities, fading the interface until it becomes invisible. Nonetheless, there is still a long way to go before we can provide users with such invisible interfaces. Computer-based automated interpretation of human sentiment and affect is generally predicted to play a significant role in future Intelligent User Interfaces, as it carries the potential of endowing interactive applications with emotional intelligence [48].

Another key point of the study is that adding context to the input in situation-aware systems results in automatic adjustment of situation awareness and of the action list, making the UI adapt to the operator's actual needs. In addition, adaptation only actually happens when the change in context and environment is significant enough to cause a transition between two possible user interface states. Adapting the interface to the actual scenario, as described in this prototype, and providing reusable tasks with a reduced number of commands, clicks and choices decreases the operator's cognitive burden and thereby encourages interaction [49].

References

  1. ISO 9241-11. Ergonomic requirements for office work with visual display terminals (VDTs) - Part 11: Guidance on usability, 1998
  2. Nwiabu, N., Allison, I., Holt, P., Lowit, P. and Oyeneyin, B., 2012. User interface design for situation-aware decision support systems. In: IEEE Multi-disciplinary Conference on Cognitive Methods in Situation Awareness and Decision Support. 6-8 March 2012. Piscataway, New Jersey: IEEE. pp. 332-339
  3. Pavel Andreevich Samsonov. 2016. Improving Interactions with Spatial Context-aware Services. In Companion Publication of the 21st International Conference on Intelligent User Interfaces (IUI '16 Companion). Association for Computing Machinery, New York, NY, USA, 114-117. DOI: https://doi.org/10.1145/2876456.2876459
  4. Shumin Zhai. 2017. Modern Touchscreen Keyboards as Intelligent User Interfaces: A Research Review. In Proceedings of the 22nd International Conference on Intelligent User Interfaces (IUI '17). Association for Computing Machinery, New York, NY, USA, 1-2. DOI: https://doi.org/10.1145/3025171.3026367
  5. Philip Quinn and Shumin Zhai. 2016. A Cost-Benefit Study of Text Entry Suggestion Interaction. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (CHI '16)
  6. Shiri Azenkot and Shumin Zhai. Touch Behavior with Different Postures on Soft Smart Phone Keyboards. In Proceedings of MobileHCI (MobileHCI '12). ACM, New York, NY, USA, 251-260
  7. Xiaojun Bi, Yang Li, and Shumin Zhai. 2013. FFitts law: modeling finger touch with Fitts' law. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '13)
  8. Andrew Fowler, Kurt Partridge, Ciprian Chelba, Xiaojun Bi, Tom Ouyang, and Shumin Zhai. 2015. Effects of Language Modeling and its Personalization on Touchscreen Typing Performance. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (CHI '15)
  9. Per-Ola Kristensson and Shumin Zhai. SHARK2: a large vocabulary shorthand writing system for pen-based computers. In Proceedings of the 17th Annual ACM Symposium on User Interface Software and Technology (UIST 2004)
  10. Philip Quinn and Shumin Zhai. Modeling Gesture-Typing Movements. Human-Computer Interaction, pages 1-47, published online: 26 Aug 2016
  11. Shumin Zhai and Per Ola Kristensson. 2012. The word-gesture keyboard: reimagining keyboard interaction. Commun. ACM 55, 9 (September 2012), 91-101
  12. Andrew Fowler, Kurt Partridge, Ciprian Chelba, Xiaojun Bi, Tom Ouyang, and Shumin Zhai. 2015. Effects of Language Modeling and its Personalization on Touchscreen Typing Performance. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (CHI '15)
  13. Mitchell Gordon, Tom Ouyang, and Shumin Zhai. 2016. WatchWriter: Tap and Gesture Typing on a Smartwatch Miniature Keyboard with Statistical Decoding. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (CHI '16)
  14. Per-Ola Kristensson and Shumin Zhai. SHARK2: a large vocabulary shorthand writing system for pen-based computers. In Proceedings of the 17th Annual ACM Symposium on User Interface Software and Technology (UIST 2004)
  15. Zhai, S., Kristensson, P.O., Gong, P., Greiner, M., Peng, S., Liu, L., Dunnigan, A., ACM CHI 2009 Conference on Human Factors in Computing Systems Extended Abstracts. pp. 2667-2670
  16. Shumin Zhai and Per Ola Kristensson. 2012. The word-gesture keyboard: reimagining keyboard interaction. Commun. ACM 55, 9 (September 2012), 91-101
  17. Shumin Zhai and Per-Ola Kristensson. 2003. Shorthand writing on stylus keyboard. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '03)
  18. Jianlong Zhou, Jinjun Sun, Fang Chen, Yang Wang, Ronnie Taib, Ahmad Khawaji, and Zhidong Li. 2015. Measurable Decision Making with GSR and Pupillary Analysis for Intelligent User Interface. ACM Trans. Comput.-Hum. Interact. 21, 6, Article 33 (January 2015), 23 pages. DOI: https://doi.org/10.1145/2687924
  19. Paulo Salem. 2017. User Interface Optimization using Genetic Programming with an Application to Landing Pages. Proc. ACM Hum.-Comput. Interact. 1, EICS, Article 13 (June 2017), 17 pages. DOI: https://doi.org/10.1145/3099583
  20. D. Te'eni, J. Carey and P. Zhang, Human Computer Interaction: Developing Effective Organizational Information Systems, John Wiley & Sons, Hoboken (2007)
  21. B. Shneiderman and C. Plaisant, Designing the User Interface: Strategies for Effective Human-Computer Interaction (4th edition), Pearson/Addison-Wesley, Boston (2004)
  22. J. Nielsen, Usability Engineering, Morgan Kaufman, San Francisco (1994)
  23. D. Te'eni, "Designs that fit: an overview of fit conceptualization in HCI", in P. Zhang and D. Galletta (eds), Human-Computer Interaction and Management Information Systems: Foundations, M.E. Sharpe, Armonk (2006)
  24. Jaquero V.L., Montero F., Molina J., González P. (2009) Intelligent User Interfaces: Past, Present and Future. In: Redondo M., Bravo C., Ortega M. (eds) Engineering the User Interface. Springer, London. https://doi.org/10.1007/978-1-84800-136-7_18
  25. Maybury, M. Intelligent user interfaces: an introduction. In Proceedings of the 4th International Conference on Intelligent User Interfaces (Los Angeles, California, United States, January 05-08, 1999). IUI '99. ACM Press, New York, NY, 3-4, 1999
  26. López Jaquero, V., Montero, F., Molina, J.P., González, P., Fernández-Caballero, A. A Seamless Development Process of Adaptive User Interfaces Explicitly Based on Usability Properties. Proc. of 9th IFIP Working Conference on Engineering for Human-Computer Interaction jointly with 11th Int. Workshop on Design, Specification, and Verification of Interactive Systems EHCI-DSVIS'2004 (Hamburg, July 11-13, 2004). LNCS, Vol. 3425, Springer-Verlag, Berlin, Germany, 2005
  27. Paternò, F. Model-Based Design and Evaluation of Interactive Applications. Springer. 1999
  28. Wahlster, W., Maybury, M. An Introduction to Intelligent User Interfaces. In: RUIU, San Francisco: Morgan Kaufmann, 1998, pp. 1-13
  29. G. Riva, F. Vatalaro, F. Davide and M. Alaniz, Ambient Intelligence: The Evolution of Technology, Communication and Cognition towards the Future of HCI, IOS Press, Fairfax (2005)
  30. Puerta, A.R. A Model-Based Interface Development Environment. IEEE Software, 1997
  31. Eisenstein, J., Vanderdonckt, J., Puerta, A. Applying model-based techniques to the development of UIs for mobile computers. Intelligent User Interfaces 2001: 69-76
  32. Maybury, M. Intelligent user interfaces: an introduction. In Proceedings of the 4th International Conference on Intelligent User Interfaces (Los Angeles, California, United States, January 05-08, 1999). IUI '99. ACM Press, New York, NY, 3-4, 1999
  33. Puerta, A., Micheletti, M., Mak, A. The UI pilot: a model-based tool to guide early interface design. Proceedings of the 10th International Conference on Intelligent User Interfaces. San Diego, California, USA. 215-222, 2005
  34. Lang, M., Fitzgerald, B. Hypermedia Systems Development Practices: A Survey. IEEE Software. Volume 22, Issue 2 (March 2005). 68-75. 2005. IEEE Computer Society Press, Los Alamitos, CA, USA
  35. Paternò, F. Model-Based Design and Evaluation of Interactive Applications. Springer. 1999
  36. Vanderdonckt, J., Bodart, F. Encapsulating Knowledge for Intelligent Automatic Interaction Objects Selection. In ACM Proc. of the Conf. on Human Factors in Computing Systems INTERCHI'93 (Amsterdam, 24-29 April 1993), S. Ashlund, K. Mullet, A. Henderson, E. Hollnagel & T. White (Eds.), ACM Press, New York, 1993
  37. López Jaquero, V., Montero, F., Molina, J.P., Fernández-Caballero, A., González, P. Model-Based Design of Adaptive User Interfaces through Connectors. In DSV-IS 2003: Issues in Designing New-generation Interactive Systems. Proceedings of the Tenth Workshop on the Design, Specification and Verification of Interactive Systems. J.A. Jorge, N.J. Nunes, J.F. Cunha (Eds). Springer-Verlag, LNCS 2844, 2003. Madeira, Portugal, June 4-6, 2003
  38. Fischer, G. User Modeling in Human-Computer Interaction. User Modeling and User-Adapted Interaction, Kluwer Academic Publishers, 2000
  39. Horvitz, E., Breese, J., Heckerman, D., Hovel, D., Rommelse, K. The Lumiere Project: Bayesian User Modeling for Inferring the Goals and Needs of Software Users. Proceedings of the Fourteenth Conference on Uncertainty in Artificial Intelligence, Madison, WI, July 1998, Morgan Kaufmann: San Francisco, pp. 256-265
  40. Bevan, N. Quality in Use: Meeting user needs for quality. Journal of Systems and Software 49(1): 89-96, 1999
  41. Molina, J.P., García, A.S., López Jaquero, V. and González, P. Developing VR applications: the TRES-D methodology. First International Workshop on Methods and Tools for Designing VR applications (MeTo-VR), in conjunction with 11th International Conference on Virtual Systems and Multimedia (VSMM'05), Ghent, Belgium, 2005
  42. Earnshaw, R., Guedj, R., van Dam, A. and Vince, J. Frontiers in Human-Centred Computing, Online Communities and Virtual Environment. Springer-Verlag, 2001
  43. Picard, R. Affective Computing, MIT Press, 1997
  44. Prendinger, H. and Ishizuka, M., 2005. The Empathic Companion: A Character-Based Interface that Addresses Users' Affective States. In Applied Artificial Intelligence, Vol. 19, pp. 267-285
  45. S. Noh, J. Han, J. Jo and A. Choi, "Virtual Companion Based Mobile User Interface: An Intelligent and Simplified Mobile User Interface for the Elderly Users," 2017 International Symposium on Ubiquitous Virtual Reality (ISUVR), Nara, 2017, pp. 8-9, doi: 10.1109/ISUVR.2017.10
  46. A. M. Syskov, V. I. Borisov and V. S. Kublanov, "Intelligent multimodal user interface for telemedicine application," 2017 25th Telecommunication Forum (TELFOR), Belgrade, 2017, pp. 1-4, doi: 10.1109/TELFOR.2017.8249439
  47. Sarah Theres Völkel, Christina Schneegass, Malin Eiband, and Daniel Buschek. 2020. What is "intelligent" in intelligent user interfaces? A meta-analysis of 25 years of IUI. In Proceedings of the 25th International Conference on Intelligent User Interfaces (IUI '20). Association for Computing Machinery, New York, NY, USA, 477-487. DOI: https://doi.org/10.1145/3377325.3377500
  48. Björn W. Schuller. 2015. Modelling User Affect and Sentiment in Intelligent User Interfaces: A Tutorial Overview. In Proceedings of the 20th International Conference on Intelligent User Interfaces (IUI '15). Association for Computing Machinery, New York, NY, USA, 443-446. DOI: https://doi.org/10.1145/2678025.2716265
  49. Karray, Fakhri, Alemzadeh, Milad, Saleh, Jamil and Arab, Mo Nours. (2008). Human-Computer Interaction: Overview on State of the Art. International Journal on Smart Sensing and Intelligent Systems. 1. 137-159. DOI: 10.21307/ijssis-2017-283
