4 Supporting Inclusive Design of Mobile Devices with a Context Model

Introduction
The aim of inclusive product design is to integrate a broad range of diverse human factors into the product development process, with the intention of making products accessible to and usable by the largest possible group of users (Kirisci, Thoben et al. 2011). However, the main barriers to adopting inclusive product design include technical complexity, lack of time, lack of knowledge and techniques, and lack of guidelines (Goodman, Dong et al. 2006), (Kirisci, Klein et al. 2011). Although manufacturers of consumer products are nowadays more likely to invest in user studies, consumer products in general only nominally fulfil, if at all, the accessibility requirements of as many users as they potentially could. The main reason is that user-centered prototyping or testing aimed at incorporating real user input is often done at a rather late stage of the product development process. Thus, the further a product design has evolved, the more time-consuming and costly it becomes to alter it (Zitkus, Langdon et al. 2011). This is increasingly the case for contemporary mobile devices such as mobile phones or remote controls.
The number of functions and features on these products requiring user attention and interaction has increased significantly, as illustrated in Fig. 1. As a result, end users with mild-to-moderate physical or sensory impairments often have difficulties when interacting with such products. These difficulties could be anticipated and avoided if acknowledged earlier in the development process.
Additionally, typical use cases for mobile devices span a wide range of environments in which they may be used. In order to cope with these use cases, a mobile phone must be "dynamic", or responsive: the device must be able to cope with changing environments and user requirements during the completion of any given activity, e.g. writing a text message on a mobile phone while walking from an indoor into an outdoor environment, or dealing with changes in noise, light, glare, etc. The impact of factors such as location, mobility, and the social and physical environment further increases the level of comprehension and attention needed to operate the device. Accordingly, the features and capabilities of the mobile device should take the context of use into account, and appropriately support and facilitate ease of use. Given this wide range of possible situations, the features and capabilities of the mobile device should be in line with the needs of the user, in order to support device interaction in a responsive way. It is often difficult for product manufacturers and designers to implement such contextual support into existing functions of mobile devices, even if they are aware of the varieties of context of use and understand the requirements of user groups with impairments (e.g. physical, vision, hearing or dexterity impairments). And even though a wide variety of tools and methods exist to support, for example, user-centered design in general, they often fail to translate those needs into the user interfaces of products. As a result, a majority of existing consumer products only partially fulfil the accessibility requirements of impaired users.
The reason for this situation is relatively easy to explain: design tools such as Computer Aided Design applications lack supportive functions which would promote context-related design of products by default and enable designers to understand where there is an error in their design and how to fix it.

Challenge
A major challenge lies in defining an appropriate technique which can secure inclusive design of consumer products while integrating with existing design tools and frameworks. If this technique is based upon the use of an "inclusive model", then there is a need for a well-defined context model which incorporates all aspects related to the user, the environment, and her/his intended interactions. From the research perspective, the challenge may be seen as elaborating an advanced context model which is valid for settings where mobile devices are typically used, and which can be consulted for specifying, analysing and evaluating mobile phones as well as other mobile devices with similar interaction paradigms. Addressing these challenges, this chapter explores the potential of model-based semantic reasoning support for achieving inclusive design of user interfaces of mobile consumer products. A sophisticated context model, referred to as a "Virtual User Model", has been developed for this purpose; it represents the main contextual aspects of potential user groups, namely their profiles, tasks and environments. It will be demonstrated that, through the usage of logical rules and constraints (based upon expert knowledge gained from an observational study) and semantic reasoning techniques (i.e. a semantic reasoning engine), the conceptual design of a mobile phone (as representative of mobile consumer products) can be supported qualitatively and quantitatively by providing easy-to-use add-on modules for common design/development applications.
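The rule-and-constraint mechanism described above can be illustrated with a minimal sketch. The following Python fragment is not the chapter's actual semantic reasoning engine (which operates on an ontology-based Virtual User Model); all profile fields, rule conditions and threshold values are hypothetical, chosen only to show how contextual facts can be mapped to design recommendations.

```python
# Illustrative sketch of rule-based design support. The impairment labels,
# rules and numeric thresholds below are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class Context:
    """Simplified virtual user model: profile, task and environment facets."""
    impairments: set          # e.g. {"vision_mild", "dexterity_moderate"}
    task: str                 # e.g. "write_text_message"
    environment: dict = field(default_factory=dict)  # e.g. {"light": "low"}

# Each rule maps a condition on the context to a design recommendation.
RULES = [
    (lambda c: "dexterity_moderate" in c.impairments,
     "Key size should be at least 9 mm with 2 mm spacing."),
    (lambda c: "vision_mild" in c.impairments and c.environment.get("light") == "low",
     "Display contrast should be high; enable backlight by default."),
    (lambda c: c.task == "write_text_message" and "dexterity_moderate" in c.impairments,
     "Provide word prediction to reduce the number of key presses."),
]

def recommend(context: Context) -> list:
    """Return all design recommendations whose conditions hold for the context."""
    return [advice for condition, advice in RULES if condition(context)]

ctx = Context(impairments={"vision_mild", "dexterity_moderate"},
              task="write_text_message",
              environment={"light": "low"})
for r in recommend(ctx):
    print(r)
```

In a real semantic reasoning engine the rules would be expressed declaratively (e.g. as ontology axioms) rather than as Python lambdas, but the input/output relation is the same: contextual facts in, filtered design knowledge out.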

Related work
This section provides an overview of related work regarding methods and tools which, from the view of the authors, are suitable for inclusive product development, especially as applied in the conceptual phases of product development: sketch, design, and evaluation. Most tools and methods introduced in this section are based upon the usage of some kind of conceptual model. Due to the close affinity to model-based approaches, an overview of related work regarding the usage of Virtual User Models along the product development process is presented first.

Virtual user models
One promising practice for realizing inclusive product design is to employ virtual user models (VUMs) (Kirisci, Thoben et al. 2011). Virtual user models have the potential of complementing tests with real users in the early design stages, since a VUM can be seen as an abstract representation of a human's behaviour (Kirisci, Klein et al. 2011). Virtual User Models are three-dimensional, model-like images or avatars and usually provide the following functions: (1) human body modelling and analysis, (2) animation of virtual users, and (3) interaction of virtual users with virtual objects (VICON-Consortium 2010). Nowadays virtual user models are widely used for ergonomic analysis in vehicle and workplace design within the automotive and aerospace industries. They are used to validate a design in a simulation environment, check in an iterative loop whether the design is suitable, refine it considering recommendations and best practices and finally, when found suitable, produce a prototype to be checked by end users, as shown in Fig. 2. For the sketch phase, static models of the user are applied, while during the design phase virtual user models can take the form of three-dimensional human models. The usage of virtual user models for continuous support of the sketch, design and evaluation phases can be considered unique (Kirisci, Klein et al. 2011); contemporary approaches where virtual user models are utilized are only partially suitable for inclusive design. Against this background it is of particular interest to explore how virtual user models are capable of complementing the involvement of real users within the early product development phases.

Methods and tools supporting the sketch phase
For the sketch phase a mixture of qualitative and quantitative methods is common, such as user studies, field and user trials, market studies or interviews. Since the initial product development steps are often characterized by loose creativity, innovation and a need to react flexibly to changing requirements, software tools that provide inclusive design support in the sketch phase are rarely in use (Kirisci, Thoben et al. 2011). Moreover, information technology is limited to a more indirect role, such as providing structures for a quick exchange of ideas, as in mind maps, or the preparation of sketches and initial drawings in graphic design software such as Adobe Illustrator. For idea generation, on the other hand, creative techniques like brainstorming or the 635 method are well known and used in product development. Next to creative techniques, guidelines and checklists are also well-established supportive tools for the early product development phases such as the sketch phase, and are nowadays in use in industry. Some of the best-known guidelines are the ISO guidelines, such as ISO 13407 (Human-centred design processes for interactive systems) or ISO 9241 (Ergonomics of human-system interaction). The drawback of guidelines, however, is that they fail to consider all product possibilities and features (Zitkus, Langdon et al. 2011). These kinds of guidelines are usually very general and rarely of a quantitative nature. Moreover, they are often presented as descriptive texts and tables, which is not very much in line with the preferences of product developers for the presentation or visualisation of design recommendations. Furthermore, the input to the sketch and design phases may come from several internal and external sources such as customers, competitors, marketing, research and production (Pahl, Beitz et al. 2005), (Westkämper 2005).
Methods where the integration of end users within the development process is envisaged are often referred to as "user-centered design techniques". In this respect, user trials and observations are the more well-established techniques applied by some product manufacturers in the early product development phases. Although the benefit of these techniques is evident up to a certain extent, the time involved in organising the trials or observations, as well as the time needed to recruit and select a representative sample, negatively impacts the design process. An alternative technique which has less impact upon the time and budget constraints of product developers is "self-observation". Self-observation is a technique frequently used by product designers in order to test usability and accessibility through product mock-ups (Zitkus, Langdon et al. 2011). A disadvantage of this technique becomes evident when the product tests are done by the same persons who are involved in the development of the product: when the testers are too acquainted with the product, their judgement about its usability and accessibility may become subjective. It should also be noted that when the designers involved in the testing process do not have physical impairments, it is impossible for them to experience the same capability and interaction demands that an impaired user group would experience. In order to compensate for this drawback, wearable suits such as the 'Third-Age Suit', 'Age Explorer' and 'Simulation Toolkit' have been applied by designers in order to experience certain physical limitations while interacting with products. Due to the substantial physical effort and amount of time involved in wearing these suits, product developers tend not to wear them continuously during the design process (Zitkus, Langdon et al. 2011).
From the academic domain, especially in HCI (Human-Computer Interaction), a variety of methods and tools exist for designing software user interfaces under consideration of specific end user needs. These tools are often referred to as tools for "user-centered design". In spite of the vast amount of research conducted in this area, only limited efforts have been spent so far on advancing methods and tools for designing physical or mechanical interaction components of consumer products in an inclusive manner (Kirisci and Thoben 2009). An approach with a special focus on inclusive design is the "Inclusive Design Toolkit", which was developed by the Engineering Design Centre of the University of Cambridge (Clarkson 2007). The toolkit can be considered an online repository of inclusive design knowledge and interactive resources, proposing inclusive design procedures and tools which designers may consult to accompany them through their product development process. For supporting the sketch phase the toolkit provides general design guidance recommendations. In order to explore the capability loss related to certain impairments and their severity, the toolkit offers a tool called the "Exclusion Calculator". The tool calculates an estimate of the overall exclusion, or of the exclusion based on each capability demand. Although the Inclusive Design Toolkit raises the designer's understanding of the way different disabilities affect users' perception and thus their interaction with a product, the methodology is strongly dependent upon comprehensive input from the product developer. As emphasized in (Zitkus, Langdon et al. 2011), the exclusion calculation is based on the designer's selection of specific tasks; thus the designer's assumptions risk being inaccurate, which can lead to incorrect assessments.
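To make the idea of an exclusion estimate concrete, the following sketch shows how such a figure could be computed from capability demands. The population capability data is invented for illustration and does not reproduce the toolkit's real survey statistics; the independence assumption in the overall estimate is likewise a simplification of how joint capability data would actually be used.

```python
# Hypothetical exclusion estimate in the spirit of an "Exclusion Calculator".
# Fraction of an invented population whose ability falls below each capability
# demand level, indexed 0 (no demand) .. 4 (very high demand).
EXCLUDED_BELOW = {
    "vision":    [0.00, 0.02, 0.05, 0.12, 0.30],
    "hearing":   [0.00, 0.01, 0.04, 0.10, 0.25],
    "dexterity": [0.00, 0.02, 0.06, 0.15, 0.35],
}

def exclusion_per_capability(demands: dict) -> dict:
    """Exclusion caused by each capability demand taken in isolation."""
    return {cap: EXCLUDED_BELOW[cap][level] for cap, level in demands.items()}

def overall_exclusion(demands: dict) -> float:
    """Estimate assuming capability losses are independent: a user is
    excluded if *any* demand exceeds her or his ability."""
    included = 1.0
    for fraction in exclusion_per_capability(demands).values():
        included *= (1.0 - fraction)
    return 1.0 - included

# A task demanding high vision (level 3) and moderate dexterity (level 2):
demands = {"vision": 3, "dexterity": 2}
print(exclusion_per_capability(demands))  # {'vision': 0.12, 'dexterity': 0.06}
print(overall_exclusion(demands))
```

The per-capability figures show the designer which demand excludes most users, while the overall figure summarizes the design as a whole; both depend entirely on the designer's demand-level inputs, which is exactly the weakness noted above.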
When focusing strictly upon the design of specific physical interaction elements of products (e.g. dials, switches, keys, displays, etc.), only two model-based design methods could be identified by the authors, which are capable of supporting the sketch phase of a product, and accommodating the accessibility needs of users. These two methods are based upon the configuration of pre-defined user models which have synergies with the context used for existing virtual user models. Although the methods apply to the design of wearable computing systems, some of the mechanical components are also up to a certain extent relevant to technically more advanced consumer products (e.g. mobile phones, etc.).
In the first approach, a "mobile and wearable computer aided engineering system" (m/w CAE system) was proposed by Bürgy (Bürgy and Garrett 2002). By defining a constraint model, typical usage scenarios are described in which support for a primary task is needed. Based on this model, existing comparable solutions are identified or new adapted systems are drafted. The description of an exemplary scenario is realized by using elements of four sub-models: (a) the user model, (b) the device model, (c) the environment model, and (d) the application model.
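A scenario description built from these four sub-models can be sketched as follows. The field names and constraint checks are hypothetical examples, not Bürgy's original schema; the point is that cross-model constraints make conflicting design choices detectable.

```python
# Illustrative sketch of a four-part scenario model with simple cross-model
# constraints. All attributes and rules are invented for illustration.
from dataclasses import dataclass

@dataclass
class UserModel:
    posture: str          # e.g. "standing"
    hands_free: bool      # is a hand available for device interaction?

@dataclass
class DeviceModel:
    input_kind: str       # e.g. "keypad", "voice"
    display_size_mm: int

@dataclass
class EnvironmentModel:
    noise_level: str      # e.g. "high" on a construction site
    lighting: str

@dataclass
class ApplicationModel:
    primary_task: str     # the task the device must not disrupt

@dataclass
class Scenario:
    user: UserModel
    device: DeviceModel
    environment: EnvironmentModel
    application: ApplicationModel

    def constraint_violations(self) -> list:
        """Check simple constraints that span several sub-models."""
        issues = []
        if not self.user.hands_free and self.device.input_kind == "keypad":
            issues.append("Keypad input conflicts with hands-busy primary task.")
        if self.environment.noise_level == "high" and self.device.input_kind == "voice":
            issues.append("Voice input is unreliable in a noisy environment.")
        return issues

scenario = Scenario(
    user=UserModel(posture="standing", hands_free=False),
    device=DeviceModel(input_kind="keypad", display_size_mm=50),
    environment=EnvironmentModel(noise_level="high", lighting="bright"),
    application=ApplicationModel(primary_task="equipment inspection"),
)
print(scenario.constraint_violations())
```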
The other approach to design support for mobile hardware components was published by Klug in 2007 (Klug and Mühlhäuser 2007). The approach focuses on the documentation and communication of specific use cases. Shortcomings in these areas lead to misunderstandings and false assumptions which produce many subsequent errors during the design process. This challenge is met by defining models that allow a correct representation of typical scenarios where wearable computers are applied, enabling systematic documentation of use cases. These models consist of a work situation model, a user model, and a computer system model. The goal is to make the representation comprehensible for the whole design team, thus enabling interdisciplinary communication between members from different backgrounds. The author points out this characteristic as being of outstanding importance on the way to designing an optimal wearable device for a given scenario. Due to the intense tailoring to a certain type of use case, the approach does not easily adapt to other scenarios. The work aims to describe use cases at a very fine granularity, which makes it suitable for well-defined, recurring tasks in a fixed, well-known environment. Use cases with changing environments and somewhat unpredictable tasks cannot be described at such a high level of detail without limiting the flexibility necessary to cope with dynamic change.

Methods and tools supporting the design phase (CAD phase)
Regarding the design phase of products, the authors refer explicitly to the phase of conceptualizing the product model, where most of the development tasks and sub-tasks are typically supported by so-called Computer Aided Technologies (CAx), where "x" stands for a range of applications (Mühlstedt, Kaußler et al. 2008). Since Computer-Aided Design (CAD) tools are commonly used for this purpose, the related design phase can be called the "CAD phase". While CAD originally focused on the preparation of technical drawings, nowadays nearly all systems provide 3D models of parts and assembled products based on true-to-life parameters. In order to cope with the needs of inclusive product design, several CAx applications provide virtual user models within their portfolio. The most widely used applications in companies include Pro/Engineer, CATIA, and SolidWorks (VICON-Consortium 2010). High-end CAD systems, such as those mentioned, are extensible through digital human model plugins such as Manikin, RAMSIS, Jack, or Human Builder, to name just a few. However, these plugins usually address ergonomic aspects such as validating the usability of user interface components in products (operating a machine, interacting with an aircraft cockpit, or space analysis in a car). In the scope of providing design support for inclusive product design, it should be noted that contemporary virtual user models often have only a limited ability to represent specific user groups such as users with certain physical impairments (Mühlstedt, Kaußler et al. 2008). This means that physical impairments are not sufficiently incorporated. In this respect, the simulation of human fine motor skills, such as the movement of single fingers and joints (e.g. for interacting with the keys of a mobile phone), exceeds the capabilities of most virtual user models available today.
It should also be noted that simulation is performed according to the designers' assumptions, which are dependent upon their design experience and knowledge about the end user groups to be included.

Methods and tools supporting testing and evaluation phase
For ergonomic analysis, testing and evaluation of product designs, tools incorporating virtual user models are available mainly in the area of product lifecycle management, e.g. Tecnomatix (Siemens/UGS) or Siemens NX, some of which were already mentioned above (Demirel and Duffy 2007). These tools are used for testing via simulation by building Virtual Reality (VR) environments to illustrate novel technology and letting user representatives evaluate the concepts by watching the VR simulation and interacting with it. Up to a certain degree this helps to obtain early user feedback, long before real prototypes are available. This approach is used to give user representatives an immersive, VR-based 3D experience of a future system. One example of this kind of approach, using an Immersive Simulation Platform, was realised in the European co-funded project VAALID. In this solution a user is immersed in a virtual environment, allowing her or him to experience some situational aspects of that environment (Schäfer, Machate et al. 2010). A shortcoming of this approach, from the point of view of the authors of this chapter, is that the evaluation of a product depends upon the participation of real users. From a technical point of view, the system cannot be integrated with, for example, existing CAx applications, and it needs a very powerful computing environment in order to take advantage of its full simulation capabilities. A similar VR approach was used as a way to collect user feedback throughout the design of wearable computing IT to support fire-fighters (Klann, Ramirez et al. 2006), (Klann 2007). Here, virtual and pervasive prototyping was used to test and design supportive technologies (ubiquitous and wearable technology) for the very specific domain of firefighting. Instead of elaborate user models, simple ones were used in conjunction with strong user participation in simulation sessions.
In the military domain there exists a Virtual User Model called SANTOS, which was developed within the Virtual Soldier Research Program of the University of Iowa (Zheng, Xue et al. 2010). It uses accurate biomechanics with models of muscles, deformable skin and the simulation of vital signs. With this system, analyses of fatigue, discomfort, force or strength can be performed. Furthermore, modules for clothing simulation, artificial intelligence and virtual reality integration are available for real-time systems. However, a smooth integration into product development software is not possible. Other models such as the Boeing Human Modelling System (BHMS) or the System for Aiding Man-Machine Interaction Evaluation (SAMMIE) complete this listing (Sundin and Örtengren 2006).
Besides simulation, VUMs can be used to detect accessibility and usability problems of human interface designs. In the area of accessibility only one case study was identified, namely HADRIAN. HADRIAN is an extension of the CAD software SAMMIE CAD and was developed specifically to study tasks for elderly and disabled people. The system includes a database drawn from an anthropometric survey of 100 individuals with a wide range of abilities (Marshall, Case et al. 2004). The aim pursued in this approach is to detect accessibility issues during the interaction between users and automated teller machines (ATMs) (Summerskill, Marshall et al. 2009). One of the disadvantages is that the digital human models in HADRIAN are based on a series of movements and forces that represent not the maximum but the comfortable range for each specific task under analysis (Porter, Case et al. 2004). Another disadvantage of this system is shared with the other Virtual User Models described above, namely the limits on simulating human fine motor skills (VICON-Consortium 2010). It is also worth mentioning that several European Union co-funded projects, such as VICON, VERITAS, GUIDE, and MyUI, are currently working on defining a common Virtual User Model which incorporates a wide range of disabilities and physical impairments of users, addressing the most frequent accessibility issues encountered when interacting with products.

The design approach
Context represents, on a universal scale, the relevant aspects of the situations of the user groups (Hull 1997). Hence, a context model describes the characteristics, features, and behaviour of a specific user group. Complementarily, it also includes the aspects related to the tasks, interactions, user interface, and the environment in which she or he interacts with consumer products (Kirisci, Klein et al. 2011). Accordingly, the context model as proposed in this chapter possesses different facets for supporting the development process. Likewise, a virtual user model is an abstract representation of an envisaged user group which complementarily involves a description of the underlying context. It is therefore legitimate to consider the envisaged context model a "virtual user model". Fig. 3 provides an overview of the underlying concept, first introduced in (Kirisci, Klein et al. 2011) and (Mohamad, Velasco et al. 2011), emphasizing the interplay between the virtual and the real world.
The data for the context model (referred to as the virtual user model in Fig. 3) is based upon (1) the accessibility needs of the envisaged end user groups, (2) the user profile, (3) the tasks of the users, and (4) the environment in which the users are interacting. The context model interacts with the sketch, design and evaluation phases of a product by providing qualitative and quantitative design recommendations and constraints. From the point of view of a designer, support in the initial sketch phase appears in the form of text-based recommendations with respect to potential user interface elements. Up to this point, the recommending character of the context model can be compared to an expert system as defined in (Castillo, Gutiérrez et al. 1997). However, expert systems are usually highly domain-specific and thus not easily adaptable to other domains. The context model, in contrast, should be easily adaptable to other contexts by the designer, and the design concept goes beyond the provision of recommendations. In the design phase the context model guides the designer with templates and design patterns for interaction components of consumer products. For the evaluation phase, a 3D virtual character in a virtual environment is established in order to evaluate a developed product design against predefined usage scenarios. After several iterative development cycles, the results are used for the realization of a physical prototype and ultimately a final product. Results that have an impact upon the context model are fed back into the model. This ensures that the context model is extendable and continuously kept up to date with contemporary design knowledge.
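The interplay between the phases and the feedback loop described above can be sketched as follows. The class, facet and field names are invented for illustration; in the actual system the phase-specific outputs would be produced by the semantic reasoning engine rather than by a lookup table.

```python
# Illustrative sketch of the context model's four data facets, its
# phase-specific outputs, and the feedback loop. Names are hypothetical.
class ContextModel:
    """Holds the four data facets of the virtual user model."""
    def __init__(self):
        self.data = {
            "accessibility_needs": {},
            "user_profile": {},
            "tasks": {},
            "environment": {},
        }

    def recommendations_for(self, phase: str) -> str:
        # In the real system this would query the semantic reasoner;
        # here we only illustrate the kind of output each phase receives.
        outputs = {
            "sketch": "text-based recommendations for user interface elements",
            "design": "templates and design patterns for interaction components",
            "evaluation": "3D virtual character scenarios for validation",
        }
        return outputs[phase]

    def feed_back(self, facet: str, key: str, value) -> None:
        """Merge a result from an iteration back into the model."""
        self.data[facet][key] = value

model = ContextModel()
print(model.recommendations_for("sketch"))
# An evaluation finding is fed back, extending the model for the next cycle:
model.feed_back("accessibility_needs", "min_key_size_mm", 9)
print(model.data["accessibility_needs"])
```

The design choice worth noting is that the feedback path is part of the model's interface: every development cycle both consumes and extends the same knowledge base, which is what keeps the context model current.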

Prerequisites for creating the context model
One of the prerequisites for creating an appropriate context model is to identify and record key usability issues. These should then be presented to designers in a usable and easily adaptable manner, rather than overburdening them with lengthy guidelines (Kirisci, Thoben et al. 2011). Moreover, the aim should be to provide knowledge filtered with respect to a specific context of use. In order to support this objective, an observational user study was conducted with the aim of identifying and describing key usability issues for users with mild-to-moderate impairments. The procedure and results of the study have been published comprehensively in (Fiddian, Bowden et al. 2011). The observational study was meant not only to record the impairments per user, but moreover to understand their relevance to the performance of certain tasks in a specific environment. For instance, the impact of a mild-to-moderate vision impairment will be strongly affected by, for example, the light conditions in the environment. This should, in turn, influence the functionalities, design and capabilities of a mobile device.
To identify and describe key usability issues that people encounter when using specific consumer product types, the focus was upon people with one of three common impairments and people showing combinations of these impairments. One group consisted of users with a single mildly developed impairment: vision impairment, hearing impairment, or manual dexterity impairment. Another group consisted of elderly users with age-related impairments, usually a combination of mild-to-medium forms of the previously mentioned impairments; these can also be referred to as multiple sensory impairments. The levels of impairment severity covered by this research were mild to medium (as opposed to severe or profound), and these were determined for each participant during the research process. The mentioned impairments were chosen because of the commonality of the afflictions and the effect they have on using consumer products (touch, sight and hearing are the primary senses used when interacting with an object). Besides mobile phones, the consumer products investigated in detail in this study were white goods such as washing machines. The research involved carrying out detailed observational studies of the participants in their own homes. The most important aspect of this research was identifying key problem areas with existing product designs, looking for commonality within and between impairment groups and differing products, and presenting this information accurately and in an accessible and usable format.

Methodology
Detailed ethnographic research was carried out on a group of 58 elderly people from the UK, Ireland and Germany who had a range of mild-to-medium impairments. Three types of WHO-classified impairments were focused upon: hearing loss (B230), sight loss (B210) and manual dexterity impairment (B710/B730). The research comprised a combination of interview and observational techniques and investigated the main usability problems which these specific users encountered when using their washing machine and mobile phone in a typical use environment. The main research methodology employed was detailed observational studies carried out in the participants' own home environment. This methodology was used in the first phase of user testing, as the participants had already had sufficient time to use their own products.
The research methodology involved detailed questioning and observation of a relatively small number of participants, 58 in total. The reason for this is that in order to identify the key usability issues, a researcher needs not only to ask the opinion of the participant but also to observe where problems occur, record events and encourage greater feedback from the user.
It was considered important that the research be carried out in suitable environments. When using a mobile phone, the environment can have a considerable impact on the usability of the product. For practical reasons it was decided that users should carry out tasks using their own mobile phone in their normal domestic environment. However, whenever possible it was suggested that the user be observed using the phone in both low and high lighting conditions. Additionally, the users were observed using the product in both static and mobile settings, so they were encouraged to use their mobile phone both indoors and outside.
The researcher directed the participant to carry out specific tasks related to the everyday use of the products, made objective observations and asked relevant questions. This procedure followed a standard questionnaire/methodology formulated before and during the pilot research. Furthermore, users were asked how easy or difficult they found each task. This was explored in detail, including talking through the process, if there were particular problems or if it was deemed relevant. The observer needed to investigate how much each usability issue was down to the specific impairment(s) of that user, as opposed to being more attributable to product design or environmental factors. Observations were recorded in written, abbreviated form.
In the second phase of user testing, the participants were asked to evaluate a set of unfamiliar products, which helped the researchers to identify issues relating to first-time use, for example how intuitive the product is and how useful the instructions are.

Results of the study
With the mobile phone, many users (n = 15) reported having difficulties using the on/off control. One issue was related to the force required to press the button: many users (n = 10) had difficulty with this task, either as a result of having to use too much force or because they experienced pain or discomfort. One user, who had arthritis, reported leaving the phone on continuously to avoid the difficulty of turning it on and off. In the observations, 12 people reported that the button required force to operate, so this was clearly a significant problem.
Eight participants had problems when making a voice call and all of these problems were related to the operation of the number keys and other controls. Three users had problems due to the number keys being too close together, so they often pushed more than one button at the same time.
Other individual problems included buttons being too small and fiddly, buttons being difficult to operate with long fingernails, difficulties deleting incorrect numbers, the numbers on the keys being hard to read, and force being required to operate the keys. No users reported problems when receiving a voice call. The observations recorded few negatives, but in three cases the ring was too quiet and one user had problems with the keypad lock.
There were many positive observations, including a loud ring, strong vibration, the screen lighting up, and it being easy to know when a call was coming in. Over half of the users (n = 25) used the 'Ring and Vibrate' setting to alert them to incoming calls, but almost as many (n = 19) used ring only. Two users were alerted to calls by ring and light, but none chose to be alerted by vibration alone. When asked why they chose a particular alert, 10 users said that was simply the way the phone was set up for them: the phone was either already on this setting, or a family member had selected it for them. Other replies generally explained and justified the users' chosen method of being alerted to a call.
Of the 49 participants who took part in this research, only 26 sent SMS text messages and answered questions on these tasks. This indicates that although many elderly people now have mobile phones, their primary use is likely to be for occasional voice calls rather than text messaging. 28 participants attempted this task. Similar to the results for receiving a voice call, 50% (n = 14) used the 'Ring and Vibrate' setting to alert them to an incoming message and 43% (n = 12) used ring/tone only. Many users (n = 17) reported problems when asked whether the number keys were large enough for them. Ten users specified that the keys were too small: seven found the keys too small and fiddly, two tended to push two keys at once, and two simply found the keys too small. Other problems recorded included the number keys not being arranged in a straight line, the buttons not protruding enough, having to use fingernails to operate the small controls, and a rounded button shape making it too easy for the finger to slip off and press the neighbouring button by mistake.

Some users (n = 10) reported not finding the display easy to read or having problems with it. Three users (two from group B and one from group C) found the displays on their phones too small, and one of these also disliked having dark grey figures on an orange background. Two users (from groups A and B) found that their phone had an energy-saving function which darkened the screen; this happened too quickly for them, making the display difficult to read. A single user (group A) commented that the calendar and menu functions looked quite faint. Understanding icons and descriptions was the usability issue with the most problems for mobile phones: 21 participants reported problems when asked 'do you understand the icons or descriptions?'. Twelve users said they were not sure about some of the words or icons used, and three people did not understand the menu functions and so did not use them. Two users thought the descriptions were not intuitive, and another two thought the instructions and language were too complicated. Two users commented that they liked the clear diagrams (icons) and words. Fewer than half of the users (n = 23) attempted to add contact details to the phonebook; the others declined, having previously tried and been unable to complete the task.
The results of the observational user study were used directly in the creation of the user profiles, tasks, environments, and design recommendations in the context model. This made it considerably easier to create user model, environment, and task instances and variables, including rules and constraints for user profiling and recommendation instances.

The context model: description and implementation
This section describes the proposed context model in detail and illustrates how the context model has been implemented.

Description of the context model
The context model should possess the capability to determine recommendations for appropriate interaction components of a consumer product. To this end, it incorporates well-defined partial models which are logically interrelated with one another in order to determine appropriate recommendations for the designer. Using the results of the observational study introduced in the preceding section, a suitable taxonomy for the context model has been outlined in (Kirisci, Klein et al. 2011), consisting of the following partial models: User Model, where all information about the potential users of the product is stored. The focus is on exemplary users with mild to moderate physical impairments. The respective user models are divided into several subgroups (profiles), which in turn are divided into different levels of impairment. Additionally, there are mixed profiles describing the group of elderly people who are subject to a mixture of hearing, sight, and dexterity impairments.
Component Model, which describes specific user interface components and adds functionalities to specific instances. For example, a button can be pressed, so the button carries the functional attribute of a switch with two states. This model is also used to connect recommendations with components, especially in the CAD phase, where the designer's input is related to a component.

Model for Recommendations, where guidelines and the experience of the designer are stored. These consist of the predicates "Name", "Text", "Summary", "Rules", and "Phases", plus an "Attachment", where, for example, sketch-phase template layers can be stored. A component attribute defines rule sets for the design phase if a recommendation is related to a specific component or component functionality such as "Audio Output".
Environment Model, where all data about the environment is stored. This includes the physical conditions of the real-world environment, the objects and characteristics of the environment, etc.
Task Model, which describes how activities are performed to reach a pre-defined goal. This model may be based, for example, on Hierarchical Task Analysis (HTA), providing an interface where the designer can define the user's actions for evaluation in the virtual environment envisaged in the evaluation phase.
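Taken together, the partial models above can be sketched as simple data structures. The following Python sketch is purely illustrative: the class fields, example values, and the two-state switch encoding are assumptions made for clarity, not the actual VICON ontology.

```python
from __future__ import annotations
from dataclasses import dataclass, field

# Illustrative data structures for the five partial models of the context
# model. All field names and example values are assumptions for the sketch.

@dataclass
class UserModel:
    name: str
    profiles: list[str] = field(default_factory=list)   # WHO ICF based profiles

@dataclass
class Component:
    name: str
    functions: list[str] = field(default_factory=list)  # functional attributes

@dataclass
class Recommendation:
    name: str
    text: str
    summary: str = ""
    rules: list[str] = field(default_factory=list)
    phases: list[str] = field(default_factory=list)     # e.g. "sketch", "cad"
    component: str | None = None                        # ties rule sets to a component
    attachment: str | None = None                       # e.g. sketch-phase template layers

@dataclass
class Environment:
    name: str
    conditions: dict[str, str] = field(default_factory=dict)  # physical conditions

@dataclass
class Task:
    name: str
    subtasks: list[Task] = field(default_factory=list)  # hierarchical task analysis

# A button is a component that can be pressed, i.e. it carries the
# functional attribute of a switch with two states:
power_button = Component("on_off_button", functions=["switch_two_states"])
```

The point of the sketch is only to show how the models interlock: recommendations reference components and phases, tasks nest hierarchically in the spirit of HTA, and user models aggregate impairment profiles.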

Overall architecture
In relation to functional requirements, such as producing component recommendations as an output, the context model needs to be able to parse the sub-models using logical constraints. This is necessary in order to build an inference model containing all relevant data. For the implementation, an architecture is proposed which includes the context model as a knowledge base. Fig. 6 shows the architecture of the overall system implementing the context model. It is divided into three parts: the backend, where all data of the context model is stored; the frontend, which integrates company-specific design and testing applications as well as all client-specific features for obtaining recommendations (the recommendation module); and the middleware layer, which provides a seamless connection between the front-end applications and the reasoning engine via a socket connection handler and socket server.
To this end, a software framework has been specified on top of a reasoning engine, as outlined in Fig. 6.
The proposed architecture provides the frontend services through which the user accesses the required functionality. The recommendation system can be accessed in the sketch phase as well as in the detailed CAD phase, so three different types of front-end modules are provided to the user. The middleware services handle all incoming and outgoing connections and provide all relevant data to the front-end modules. Every recommendation is marked with a phase attribute, which defines in which phase the recommendation will be presented. Additionally, every recommendation instance contains a user model, environment, task, or component rule. The backend services provide access to the ontology schemes, algorithms, and data in order to control and manage the framework. The sketch phase is characterized by qualitative design recommendations presented to the designer, which can be used for drafting the user interfaces of the envisaged product. In order to give the designer flexibility in the creative process, only qualitative (high-level) design recommendations are offered in this phase. An idealized workflow would show the corresponding recommendations (such as "use a maximum of 5 buttons in total") directly on screen while the designer drafts the product shape on paper or a digitizer tablet. The designer can save all settings among the given recommendations in a status file called "VSF". The VSF serves not only to save and reload the corresponding information but also to "import" it into the subsequent modules (e.g. the CAD module).
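The phase filtering and VSF export can be sketched in a few lines. The record structure and the use of JSON for the VSF are assumptions made for this sketch; the actual file format and rule syntax are not specified here.

```python
import json

# Each recommendation carries a phase attribute and one rule reference
# (structure assumed for this sketch).
recommendations = [
    {"name": "ButtonCount",
     "text": "Use a maximum of 5 buttons in total",
     "phase": "sketch", "rule": "user_model:moderate_dexterity"},
    {"name": "ButtonHeight",
     "text": "Respect a tactile minimum button height",
     "phase": "cad", "rule": "component:button"},
]

def for_phase(recs, phase):
    """Return only the recommendations tagged for the given design phase."""
    return [r for r in recs if r["phase"] == phase]

def save_vsf(path, selections):
    """Persist the designer's current selections as a status file (VSF) so
    subsequent modules (e.g. the CAD module) can import them."""
    with open(path, "w") as f:
        json.dump(selections, f, indent=2)

# In the sketch phase, only the qualitative high-level recommendation is shown:
sketch_recs = for_phase(recommendations, "sketch")
```

The phase attribute is what keeps the sketch phase uncluttered: quantitative, component-level recommendations stay hidden until the CAD module imports the VSF.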
The idea is that the context model is established sequentially through four main steps, each of which applies specified rule sets, as illustrated in Fig. 7 (Kirisci, Klein et al. 2011):

Applying user model rules
The General Rule Reasoner uses the user model rules to classify all instances of the user model class as members of specified WHO ICF profiles (e.g. a specific profile for people with moderate hearing impairment).

Generation of initial recommendations
This step works like the first, except that it uses the recommendation rules and instances, based upon the user model profiles.

Creation of environment recommendations
This step creates classes based on the ID of every environment and adds all textual and component recommendations inferred by the environment rules as members of these new recommendation classes (e.g. a recommendation class for an instance of the environment). These rules can also use the previously defined recommendation classes.

Creation of task recommendations
The last step creates all task-related recommendations based on the task rules and all previously defined recommendations. The procedure is the same as for environment recommendations: each task ID defines a dynamically created class containing the recommendations for that specific task.
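The four steps above can be sketched as a small inference pipeline. All rules, profile names, and recommendation texts below are invented for illustration; the actual system reasons over an ontology with a General Rule Reasoner rather than plain Python dictionaries.

```python
def apply_user_rules(user):
    """Step 1: classify the user instance into WHO ICF based profiles."""
    profiles = set()
    if user.get("hearing") == "moderate":
        profiles.add("moderate_hearing")
    if user.get("dexterity") == "moderate":
        profiles.add("moderate_dexterity")
    return profiles

def initial_recommendations(profiles):
    """Step 2: recommendations inferred from the matched profiles."""
    rules = {
        "moderate_hearing": "Provide a visual call alert",
        "moderate_dexterity": "Use a maximum of 5 buttons in total",
    }
    return {rules[p] for p in profiles if p in rules}

def environment_recommendations(env_id, recs):
    """Step 3: a per-environment class (keyed by environment ID) extending
    the previously derived recommendations."""
    env_rules = {"outdoors": {"Increase display contrast for sunlight"}}
    return {env_id: recs | env_rules.get(env_id, set())}

def task_recommendations(task_id, env_recs):
    """Step 4: per-task classes built the same way from all prior results."""
    task_rules = {"send_sms": {"Enlarge the number keys"}}
    merged = set().union(*env_recs.values())
    return {task_id: merged | task_rules.get(task_id, set())}

# Running the pipeline for one hypothetical user, environment, and task:
user = {"hearing": "moderate", "dexterity": "moderate"}
recs = initial_recommendations(apply_user_rules(user))
env_recs = environment_recommendations("outdoors", recs)
final = task_recommendations("send_sms", env_recs)
```

Each step only adds to the recommendation sets of the previous steps, which mirrors the sequential build-up of the context model described above.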

The context model in the CAD phase
The user data used in the CAD phase contains, in addition to the user model, environment, and task selections of the first phase, an annotation for components. The annotation option for each component part is obtained from the component model of the context model, as seen in Fig. 6. After the annotation, the module presents recommendations dedicated to the annotated user interface component. Although some recommendations are likely to overlap, the majority will be added to those of the first phase (sketch phase), providing more detailed insights.
As described above, the inference of the CAD design recommendations uses component tags on each recommendation instance. Additionally, the instances can define component rules, which enforce minimum parameter values for the annotated objects. The corresponding parameter values must already be defined in the CAD product to establish a linkage to the recommendations (e.g. a "button_height" parameter must be defined within the CAD model to be manipulable by a corresponding rule).
If a component rule and a corresponding value are defined for the currently selected recommendation, an "Apply" button becomes visible in the recommendation view of the CAD module, as shown in Fig. 9. The designer can use this button to check and adjust minimum parameter values directly within the CAD model.
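The effect of the "Apply" button can be sketched as follows. The parameter dictionary, the rule format, and the millimetre values are assumptions for this sketch; the real module manipulates parameters inside Siemens NX.

```python
def apply_component_rule(cad_params, rule):
    """Raise a CAD parameter to the rule's minimum value if it falls short,
    mirroring what the "Apply" button does in the recommendation view.
    Returns the (possibly updated) parameters and whether a change was made."""
    name, minimum = rule["parameter"], rule["min_value"]
    if name not in cad_params:
        # The linkage only works if the parameter pre-exists in the CAD model.
        return cad_params, False
    if cad_params[name] < minimum:
        cad_params[name] = minimum
        return cad_params, True
    return cad_params, False

# Hypothetical example: a button modelled 1.2 mm high; the rule demands 2.0 mm.
params = {"button_height": 1.2}
rule = {"parameter": "button_height", "min_value": 2.0}
params, changed = apply_component_rule(params, rule)
```

Note that the rule silently does nothing when the parameter is absent, matching the requirement that the linkage must be established in the CAD model beforehand.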

The evaluation of the context model
In order to highlight the advantages (but also the limits) of the proposed context model for inclusive product development, a real use case of a European mobile phone manufacturer serves as a reference scenario. Technically speaking, this section describes how the context model can be used by product developers and design teams along the development process of mobile phones. The underlying idea is to secure inclusive design aspects from as early as the sketch phase, through the design phase, up to an initial product evaluation. To make this approach more tangible, a typical design scenario is introduced.

Scenario: development of a mobile phone
A designer is developing a new mobile phone for the mainstream mobile phone market. He wants the product to be as accessible and usable as possible for as many people as possible, while at the same time looking attractive and appealing to customers. He sketches a new design idea and uploads it onto the computer. As he marks up the sketch, he assigns the appropriate labels to the various user interface components. As he does so, design recommendations are provided by the system to warn him well in advance about potential usability issues with each component and to ensure that he addresses these issues at the earliest possible stage. The cost of making changes increases exponentially as the design reaches the later development stages, so the designer wants to identify and address as many usability issues as possible at this earliest stage.
Once the designer gets into the next phase of the design project, he further develops the design and starts to conduct virtual user tests of the user interface. Since he wants this new mobile phone to be as universally designed as possible, he tests the design with all of the preset virtual user profiles and in a range of virtual environments. He has decided to design a touch screen phone, so most of the buttons and controls are onscreen.

Technical view on the scenario
The designer wants the product to be as accessible and usable as possible for as many people as possible, which means that he must be aware of possible impairments that will have an impact on the product design. To accomplish this, he starts with the sketch phase, using the sketch design application of the framework.
The user model "Gandalf" is the most impaired user model of the framework. As seen in the user model information field in Fig. 10, the model is a member of the WHO ICF based groups for moderate hearing, manual dexterity, and visual impairments. As a result of this selection, the designer receives a list of appropriate recommendations, each of which describes problems related to the impairments of the user model.
After selecting the user model, the designer can select different environments analogously, obtaining additional recommendations for each selection. Once the designer has made all selections of user model, environment, and tasks, he can export the current set of recommendations into the status file (VSF).
In the next phase the designer creates his product in a virtual environment. Fig. 11 shows a product development view of Siemens NX in the design phase, including the annotation module, where the designer defines his components.
After the annotation, he can import the VSF into the CAD module in order to receive the recommendations in line with the previous selections. In the recommendation view of the module he can select his annotated component and the imported VSF, and receives all recommendations for that component, as highlighted in Fig. 9.

Discussion
The authors of this chapter are convinced that product designers are more likely to adopt tools for inclusive design the smaller the tools' impact on the design process. In other words, the earlier a product meets the user's requirements, the smaller the impact of changes on the design process, including the project budget, the project plan, and the design activity. It is therefore of crucial importance to understand the product development process in detail, in order to know in which phases tools that support inclusive design can be integrated. Establishing tools that fit into the existing design process is thus one of the key issues in fostering acceptance of inclusive design tools in industry. This justifies the proposed design approach's focus on integration with a widely used CAD application such as Siemens NX, which offers an effective and undisruptive way to present design recommendations to the product developer. Since the presented findings are based on ongoing research, it is too early to draw final conclusions about the willingness of product manufacturers to adopt this approach in their existing design processes. However, a European mobile phone manufacturer with vast experience in more traditional user-centered design approaches is already testing the solution within its product design department. Additionally, the solution is being evaluated by a major manufacturer of white and brown goods in the development of washing machines and remote controls for TV sets. Through these tests, the validity and the limits of the proposed context model shall be identified, with the aim of improving the overall solution to the benefit of product designers.

Fig. 11. CAD prototype of a mobile phone and annotation module

Conclusions
The presented design approach, based on the described context model, is capable of supporting the product development process at an early stage, before the realization of prototypes. It should, however, not be understood as a complete substitute for real users, but rather as a complement to their involvement, and thus an opportunity to minimize the effort of applying more costly and time-consuming techniques in the early design phase. The benefit for mainstream manufacturers of mobile products such as mobile phones, gadgets, and remote controls is obvious, as they would be able to develop their products in an inclusive manner, making them accessible for users with mild to moderate impairments. At the same time, it is vital that the product remains attractive for non-impaired users as well. A main challenge was to seamlessly integrate the context model into the existing product development processes of manufacturing companies by integrating it into mainstream CAD applications. This challenge was tackled by integrating the context model into Siemens NX 7.5 (representing a widely used CAD application) and presenting the designer with qualitative and quantitative recommendations based on the specified values in the CAD software. The next development phase of the proposed system will focus on the evaluation phase, in which designers will be able to test their recommendation-based product design in a virtual environment through a digital human model corresponding to the data of the context model as configured by the designer in the sketch phase. It is foreseen that this shall be done while remaining within the same CAD application, Siemens NX. As the digital human model, "JACK" shall be used and adapted according to the collection of profiles of impaired end users in the context model. The proposed design approach envisages that the output of the evaluation phase should flow back into the preceding phases, such as the CAD phase and the sketch phase.
In this way, a continuous update of the context model shall be realized. The demonstration of this mechanism shall also be considered in the next iteration. Finally, it should be noted that the focus of the present research is mainly on the feasibility of the approach rather than on guaranteeing the validity of the data in the context model. Even though a comprehensive field study has been conducted within this research, the future quality of the data (recommendations, design constraints, user interface components, etc.) in the context model is highly dependent upon the availability of quantitative, accurate data. At the same time, the authors are confident that, through the developed design framework, the findings of quantitative user studies can be integrated more easily and cost-efficiently into early product development.

Acknowledgements
This work has been carried out under the research project VICON (project No. 248294), funded by the European Commission under the ICT Framework (ICT-2009-7.2). We wish to express our gratitude and appreciation to all the project partners for their contributions during the development of the various ideas and concepts presented in this chapter.