Open access peer-reviewed chapter

Towards Agility and Speed in Enriched UX Evaluation Projects

Written By

Juliana Alvarez, David Brieugne, Pierre-Majorique Léger, Sylvain Sénécal and Marc Frédette

Submitted: 30 August 2019 Reviewed: 17 September 2019 Published: 13 November 2019

DOI: 10.5772/intechopen.89762

From the Edited Volume

Human 4.0 - From Biology to Cybernetic

Edited by Yves Rybarczyk


Abstract

Recent research has called for the use of enriched measures, that is, psychophysiological measures of emotional and cognitive states, in user experience (UX) testing. This chapter investigates how these enriched measures can inform user experience evaluation while maintaining agility and speed in managing UX projects. Using a multiple case approach, this chapter presents the analysis of 12 recent user experience projects in which enriched measures were used. Lessons learned with regard to challenges encountered are outlined. They emphasize: (1) the impact of the nature of the research question on the completion time and complexity of the project; (2) the need to communicate and coordinate among all parties; (3) the need to anticipate the collected measurements and enriched results using a mosaic of hybrid collection methods; (4) the need to adapt the presentation of results to underline the operational side without reducing the quality of the work performed; and (5) the time constraints influencing, and influenced by, the pre-tests and the project's granularity. This chapter concludes with lessons learned from an agile/UX development approach in the realization of Sprint projects.

Keywords

  • user experience (UX) research
  • psychophysiological measures
  • agile development cycle
  • usability testing project
  • case study

1. Introduction

In many industrial fields, high-growth technologies are disrupting traditional models of development in multiple ways. On the one hand, expectations and demands of the market are changing at a faster pace, creating competitive pressure for companies to launch their products at a higher rate than before. On the other hand, consumers, accustomed to constantly seeking more, increasingly expect these technologies to fulfill their specific needs, although the majority struggle to define what they really want and hope to have [1]. Hence, to meet consumers' high expectations and needs, organizations are turning toward user experience (UX) research and its possibilities to put forth enriched measures that go beyond the explicit measurements resulting from interviews, focus groups, and questionnaires [2, 3, 4].

In order to better understand and get an overall picture of user interaction and satisfaction regarding a product, enriched UX measures arising from psychophysiological and neurophysiological data—that is, cognitive and emotional measures from lived experiences—have been proposed in recent research [5, 6]. However, we know from experience that analyzing such measures is time-consuming and complex; hence, they may not always be available promptly to inform product development [7, 8]. As a result, organizations find themselves in a methodological impasse: “There is no time to do thorough usability tests with users between iterations or release cycles, and only testing paper prototypes and doing expert analyses do not provide an accurate picture of the product’s usability” [9]. They are thus expected to synchronize their production at a faster pace by adopting a rapid and efficient cycle of development while understanding the different aspects of the user’s cognitive and emotional interaction.

It is, therefore, important to understand how to facilitate enriched UX data collection and deployment by integrating an agile approach, a topic of growing interest that has been repeatedly raised in prior research on psychophysiological measures [10, 11, 12]. These implicit measures are less sensitive to social desirability and retrospective biases than explicit measures (e.g., self-reported questionnaires). Thus, the triangulation of explicit and implicit measures offers many advantages, such as providing richer and less biased UX measures. This triangulation approach provides clarity on the participants' lived and perceived experiences [13, 14, 15].

This chapter investigates how these enriched measures can inform user experience evaluation while maintaining agility and speed in UX evaluation projects. Using a multiple case approach, we analyzed 12 recent usability testing projects in which enriched measures were used. We outline the lessons learned with regard to challenges encountered, the advantages and limitations of using psychophysiological measures in UX evaluation, and the benefits for UX project management practice.


2. Literature review

The agile software development approach and the UX approach might appear, a priori, to be conflicting, since they present two distinct ways of allocating resources within a project [16]. The two approaches are founded on different premises. The agile approach focuses on product development, while the UX approach stresses the harmonious integration of the object into the user's life, including emotional engagement, hedonic appreciation, the values associated with the object, and the technological ecosystem in which the object is used [17]. Proposed iterations on well-defined functional sections of the project may not necessarily provide the same division of test units within the project [18]. The synchronization of activities and practices becomes complex. Indeed, the agile approach proposes a division of the project into working sets to be tested in interaction with the user to ensure their functionality. The UX approach, for its part, proposes a division of the project into user needs to be tested to ensure the quality of the specific and global experience of the user. Since there is such a gap between the achievement objectives of these two approaches, their integration requires good communication among all the stakeholders of the project as well as fine-tuning from its early stages.

The agile approach thus provides a development structure to rapidly create products that fulfill the user's needs, while the UX approach provides a level of empathy with the target user, an element lacking in the agile approach. In other words, on the one hand, the agile approach allows developers to create products that have value for the user: "Agile development lifecycle is characterized as a series of incremental mini-releases. Each mini-release, with a subset of the features for the whole release, has its own requirements analysis, design, implementation, and quality assurance phases, and is called a working version" [19]. On the other hand, the UX approach leads development teams to create products that are integrated harmoniously into the user's life and are adapted to them [20].

Nevertheless, both these approaches remain complementary: "Agile projects are highly feedback-driven, yet product teams often rely on user opinion in situations where observation is more appropriate (such as the focus group elicitation strategy described earlier)" [19]. Consequently, the UX approach can greatly improve the agile approach by providing a systematic and scientific way of assessing the needs of target users [18]. Yet, the integration of one approach within the other is complex since, within UX practice, there are various types of measures involving different time constraints. On the one hand, there are neurophysiological data: the preparation needed to collect this kind of data is arduous but, with a strong methodology, the data can be analyzed rapidly. On the other hand, there are perceptual data, mainly collected through interviews, which require considerable time to analyze and assess. Finally, there is a promising avenue towards enriched UX measures combining both of these data types [21].

There is a gap in the literature and a need to answer this crucial question: can enriched UX measures be performed quickly enough to be included in an agile development cycle? Two literature reviews on the subject [16, 18] present interesting conclusions and avenues of reflection. One of the main trends seems to be to promote a specialist approach through which the UX work within an agile team is carried out by a specialized designer researcher [22]. Collaboration and communication are also recurring themes in the Agile/UX literature. Communication is highlighted not only by the application of the "scrum" model but also by the use of visual artifacts: "We find that both sketches and design stories have critical roles, that these artefacts support creation and reflection, facilitate resolution of contradiction, and also work at a level of consciousness that is below the level of self-awareness" [23]. In addition, to facilitate the integration of experiential UX results into a development process based primarily on product functionality, the Little Design Up Front (LDUF) practice is the most widely adopted initiative in Agile/UX [16]. "LDUF reduces—but does not eliminate—the large amount of design work done through [User-Centered Design] at the beginning of the project so that more effort can be spent on functionality" [18]. This practice is also enriched by the Sprint 0 (i.e., initial sprint), a Sprint process whereby initial user research is done so that all stakeholders can jointly create a basic skeleton and ensure that all future Sprints add incremental real value to the project.

UX designers also often have to perform multiple roles simultaneously, involving numerous tasks such as user research, market research, user-centered design, prototyping, usability inspection, user testing, visual design, feedback, and coding [18]. Consequently, they are usually spread across different working groups, if not several departments or even different subcontracted organizations, which evidently complicates coordination and communication between all stakeholders. Moreover, UX researchers find themselves in a unique situation where they have to learn not only to adapt to a new culture and work environment but also to become quickly familiar with the project assigned to them. Often, a project may have already been initiated and may even be in a phase of advanced development, requiring UX researchers to work rapidly to take it forward. While immersed in an agile approach, UX researchers find themselves working on smaller sections of the project simultaneously instead of considering the whole project, which additionally tends to change fast.

To integrate an agile approach into a UX research methodology, we must clearly define the objectives to be achieved and ensure that the expectations of all stakeholders are realistic and well defined from the outset of the project. This is especially true since, "Once there is an established relationship with the client, and the team is familiar with both how they work together and with outside resources, they can better assess the consultant's ability to work with them in an agile setting" [24].

The integration of the two approaches into a common methodology is based on two main strategies [25]: the first suggests that the UX team should become quickly integrated into the product development cycle so that it can understand the initial mission of the project and be present from the first decisions taken, and the second strategy suggests the use and deployment of “agile” tools to facilitate communication and documentation. These are mainly personas, usage scenarios, sketches, and concept maps to quickly understand the direction of the project as well as to facilitate message transmission to all the stakeholders of the project [23].

Given the importance of collaboration among the various stakeholders involved in the project, it is essential for all members to maintain constant communication and a working synergy to ensure a shared mission and vision. In addition, integrating targeted users at key points in the development process allows creators to respond appropriately to their needs. This way of working makes it possible to ensure a certain consistency and uniformity of the project as well as to more effectively manage the expectations of the client. "In an ideal situation, UX development and research involves frequent, iterative user testing. Because agile focuses on smaller changes, it can be possible to conduct small-scale testing at various points throughout the process to ensure changes fit with UX expectations" [24]. These different parts of the project can also take the form of "Sprints" of the "development, testing, evaluation, and adjustment" cycle.

By adopting an agile approach, UX researchers tend to change their work methodology by reducing their activities, adopting a less formal process, and a more minimalist method [24]. Although the integration of an agile approach requires a restructuring of the UX experimental design, it is necessary to ensure that the integrity and enhanced value of the UX process are maintained, and even enriched with psychophysiological measures. The recent development of a laboratory management and analytics software platform for human-centered research now makes this kind of integrated process possible, which (a) enables accurate triangulation of enriched UX measures, (b) produces results in a timely manner, and (c) helps to generate meaningful recommendations [26].


3. Method

A multiple case study methodology was chosen as the preferred approach to investigate the project management practices that can be used to enable the enrichment of user experience evaluation while maintaining agility and speed in UX evaluation projects. We conducted 12 case studies on usability test projects using enriched UX methods over a short period of time (maximum 2 weeks). All 12 tests were conducted by the same organization. In this chapter, we refer to these as Sprint projects.

The multiple case studies thus make it possible to identify the inherent and recurrent markers [27, 28, 29] of Sprint project management practices in order to better define and understand them in all their complexity. Different variables of the Sprint projects were examined to better understand their mechanisms: the project's objective and its level of complexity; the execution (the UX team deployed and their work-per-hour ratio, the experimental design, the maturity of the stimuli, which were all prototypes, the tools used, the measures analyzed, and the time of completion); and, more specifically, the details of the tests (number of participants, recruitment process, and testing time, which, for most of the projects, were standardized to 12 participants and 1 h of testing). Finally, the degree of detail in the test results is presented in terms of the magnitude of the final report submitted (Table 1).

| Case | Objective | Objective difficulty (1—easy to 5—difficult) | Experimental design (condition = version of the product; task = assignment) | Maturity of the stimuli | Measures* | Time spent | Execution difficulty (1—easy to 5—difficult) | Participants | Sample | Testing time | Results |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | Analysis of user behavior | 5 | 3 conditions; 3 tasks; 1 interview; 1 survey | Prototype | A&Cl; E; A (EDA) & A (EKG); KPI; PI (SUS); N&I | 14 days of preparation; 2 days of data collection; 3 days of analysis | 5 | 12 | Millennial | 1 hour | 1 presentation; 57-page report |
| 2 | Analysis of user experience of a mobile app | 3 | 3 tasks; 1 interview; 3 surveys | Prototype | A&Cl; E; A (EDA); KPI; PI (SUS); N&I | 7 days of preparation; 3 days of data collection; 2 days of analysis | 3.5 | | Millennial | | 1 presentation; 41-page report |
| 3 | Analysis of user experience in interaction with a chatbot versus forms | 3.5 | 2 conditions; 4 tasks; 1 interview; 3 surveys | Prototype | A&Cl; E; A (EDA); KPI; PI (SUS) & PI (SAM); N&I | 7 days of preparation; 3 days of data collection; 2 days of analysis | 2.5 | | Millennial | | 1 presentation; 50-page report |
| 4 | Analysis of user experience in interaction with a transactional website | 2.5 | 4 tasks (3 sub-tasks); 1 interview; 1 survey | Prototype | A&Cl; E; A (EDA); KPI; PI (Att); N&I | 7 days of preparation; 3 days of data collection; 2 days of analysis | 2 | | Millennial | | 1 presentation; 43-page report |
| 5 | Analysis of user experience in interaction with a transactional website, version 2 | 3 | 4 tasks (4 sub-tasks); 1 interview; 2 surveys | Prototype | A&Cl; E; A (EDA); KPI; PI (SUS) & PI (Att); N&I | 7 days of preparation; 3 days of data collection; 2 days of analysis | 2.5 | | Millennial | | 1 presentation; 35-page report |
| 6 | Analysis of user experience in interaction with a transactional website, version 3 | 2.5 | 4 tasks (4 sub-tasks); 1 interview; 2 surveys | Prototype | A&Cl; E; A (EDA); KPI; PI (SUS) & PI (Att); N&I | 7 days of preparation; 3 days of data collection; 2 days of analysis | 2.5 | | Millennial | | 1 presentation; 58-page report |
| 7 | Analysis of user experience in interaction with two different versions of a web and mobile interface | 4.5 | 4 scenarios; 15 tasks; 2 conditions; 4 interviews; 1 survey | Prototype | A&Cl; E; A (EDA) & A (EKG); KPI; PI (Wq); N&I | 7 days of preparation; 4 days of data collection; 4 days of analysis | 4.5 | | Millennial | | 1 presentation; 102-page report |
| 8 | Evaluation of different age groups' user training and change management in interaction with a website | 2.5 | 6 tasks; 1 interview; 2 surveys | Prototype | A&Cl; E; A (EDA); KPI; PI (SUS) & PI (SAM); N&I | 7 days of preparation; 3 days of data collection; 2 days of analysis | 2.5 | | Millennial & Baby boomer | | 1 presentation; 56-page report |
| 9 | Evaluation of different age groups' user training and change management in interaction with a website, version 2 | 3.5 | 2 conditions; 5 tasks per condition; 1 interview; 1 survey | Prototype | A&Cl; E; A (EDA); KPI; PI (Wq); N&I | 7 days of preparation; 3 days of data collection; 2 days of analysis | 3 | | Millennial | | 1 presentation; 54-page report |
| 10 | Analysis of user experience while opening an account on a smartphone mobile app | 2.5 | 1 task; 1 interview; 3 surveys | Prototype | A&Cl; E; A (EDA); KPI; PI (SUS), PI (Wq) & PI (Att); N&I | 7 days of preparation; 3 days of data collection; 2 days of analysis | 2 | | Millennial | 30 minutes | 1 presentation; 49-page report |
| 11 | Analysis of user experience while opening a professional account on a smartphone mobile app | 2.5 | 4 tasks; 1 interview; 3 surveys | Prototype | A&Cl; E; A (EDA); KPI; PI (SUS), PI (Wq) & PI (Att); N&I | 7 days of preparation; 3 days of data collection; 2 days of analysis | 2.5 | | Baby boomer | 1 hour | 1 presentation; 79-page report |
| 12 | Analysis of user experience during an online mortgage application process from a computer | 3 | 1 task; 1 interview; 2 surveys | Prototype | A&Cl; E; A (EDA); KPI; PI (SUS) & PI (Wq); N&I | 7 days of preparation; 3 days of data collection; 4 days of analysis | 3.5 | | Gen X | | 1 presentation; 75-page report |

Table 1.

Twelve case studies and their methodological insights.

Tools: A&Cl = attention and cognitive load, measured through pupil and gaze data with a Tobii eye tracker; E = emotions, measured through facial expressions with FaceReader software; A = arousal, measured through electrodermal activity (EDA) and electrocardiogram (EKG) data with Biopac instruments; KPI = key performance indicators, measured by observation; PI = psychometric indices, measured through surveys such as the System Usability Scale (SUS), Webqual (Wq), AttrakDiff (Att), and the Self-Assessment Manikin scale (SAM); N&I = customer needs (N) and insights (I), identified through interviews and analyzed with Optimal Workshop.


Data were collected using structured interviews with at least three members of each project. The structured interview covered the following properties for each project: (i) objective of the usability test; (ii) difficulty of the objective (1—easy to 5—difficult); (iii) description of the experimental design; (iv) maturity of the stimuli (prototype); (v) tools used and measures analyzed in the test; (vi) time to execute the test; (vii) difficulty of the execution (1—easy to 5—difficult); (viii) number of participants; (ix) population; (x) testing time (in minutes); and (xi) magnitude of the report (in pages). To evaluate the difficulty measures, we averaged the answers of the respondents. Table 1 provides a summary description of all 12 projects, presented in chronological order.
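To make this coding scheme concrete, the eleven interview properties map naturally onto a small record type. The following Python sketch is ours, not the study's instrument (the field names, types, and sample ratings are illustrative assumptions); it shows one way to store a case and reproduce the averaged difficulty scores reported in Table 1.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class SprintCase:
    """One usability-test project, mirroring properties (i)-(xi) of the structured interview."""
    objective: str
    objective_difficulty_ratings: list[float]   # one rating per respondent, 1 (easy) to 5 (difficult)
    experimental_design: str
    stimuli_maturity: str                       # all cases in this study used prototypes
    tools_and_measures: list[str]
    execution_days: int
    execution_difficulty_ratings: list[float]
    n_participants: int
    population: str
    testing_time_min: int
    report_pages: int

    @property
    def objective_difficulty(self) -> float:
        # The difficulty scores in Table 1 are averages across respondents.
        return round(mean(self.objective_difficulty_ratings), 1)

# Hypothetical per-respondent ratings for case 2, chosen to match its reported score of 3.
case2 = SprintCase(
    objective="Analysis of user experience of a mobile app",
    objective_difficulty_ratings=[3, 3, 3],
    experimental_design="3 tasks, 1 interview, 3 surveys",
    stimuli_maturity="prototype",
    tools_and_measures=["A&Cl", "E", "A (EDA)", "KPI", "PI (SUS)", "N&I"],
    execution_days=12,
    execution_difficulty_ratings=[3.5, 3.5, 3.5],
    n_participants=12,
    population="Millennial",
    testing_time_min=60,
    report_pages=41,
)
print(case2.objective_difficulty)  # 3
```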

The interviews also included open-ended questions focused on project management practices. Questions covered project planning, project management, communication and coordination in the team, status of work with the external client, project execution, and analysis management.


4. Results

The 12 projects involved in this study are summarized in Table 2. In total, these projects required the participation of 144 typical users (experts and neophytes), deployed 4 neurophysiological tools and 4 psychometric tools, and concluded with 799 pages of reports. It should be noted that the organization conducted regular debriefing sessions with the client to outline the failures and accomplishments. We observe that, over time, the projects saw a significant reduction in execution time, human intervention, level of difficulty, and costs through standardization of the methodology. We went from a 19-day project to a 12-day project (including preparation time), from the involvement of 20 experts (internal staff and external sponsors) to a core team of only 4 experts, and from a level of difficulty of 5 to 2.5, all of which ultimately affect the cost of operations.

Based on the interviews and the observations, it was possible to put forward the following conclusions. To execute a Sprint project, many considerations have to be taken into account:

  1. The nature of the research: the nature of the research question impacts the completion time and the complexity of the project.

  2. The nature of the elements:

    1. Human: need to communicate regularly with the design clients and various project stakeholders and jointly establish the mandate and experimental design with the concerned design clients.

    2. Technical: need to anticipate the collected measurements and enhanced results using a mosaic of hybrid collection methods.

  3. The nature of the results: need to adapt the manner of presenting the results in order to underline the operational side without reducing the quality of the work performed.

  4. The time constraints: (a) need to adjust the granularity (level of detail) of the project according to the research question; (b) need to introduce pre-tests to allow last-minute adjustments on site; (c) need to carefully evaluate the time allotted for the project; and, thus, (d) need for scheduling.

These strategies are focused on meeting the clients’ expectations of time, budget, and UX issues.

4.1 Based on the nature of the research

4.1.1 Research question type

Every UX research project begins with a question. The nature of this question has a direct impact not only on the completion of the UX tests but also on their complexity. This complexity depends on the nature of the stimuli studied and on the level of authenticity of the desired context of use. Indeed, the research question determines the nature of the stimuli, that is, whether they are static or dynamic. For example, studying the navigation of a website on a computer screen implies the deployment of static stimuli which, a priori, are easy to analyze. Static stimuli require shorter coding and analysis time than dynamic stimuli, for example, the study of a game application on mobile. The same applies to the choice of data collection tools deployed. Coding and analyzing data from an eye tracker does not represent the same workload as coding and analyzing data from an electroencephalography (EEG) headset.

Moreover, the research question directly influences the choice of the context of use in which the experiment takes place and the level of authenticity to be respected. Inevitably, undertaking an experiment in a real-life context does not call for the same resources (material and human) or the same time frame as its realization in a laboratory context. Dynamic stimuli and authentic contexts of use are the most important limitations of Sprint projects. This is not to say that they are infeasible in a short period of time, but collecting the data and achieving the desired degree of analysis is more resource intensive. It is, therefore, important to explicitly communicate these limitations to the clients during the initial stages of the project, in order to limit the frustrations that they may generate. It is also important to note that this type of project cannot be applied within a fundamental research framework, although it relies on the results of such research to improve its structure. In other words, the co-researchers aim to propose project management structures that reflect current industry needs.

4.2 Based on the nature of elements

4.2.1 Human: communication and coordination

Communication and coordination between different stakeholders of the project are key factors to the smooth functioning of the process. Communication is carried out by daily calls, and sometimes through meetings with the design clients or within the research team itself. The research team also invites various clients to attend the data collection to ensure that there is a common understanding of each step of the UX process. The quality of communication between the clients mainly influences the joint construction of the mandate and the experimental design. This step is vital for clarifying everyone’s expectations as well as the potential results of the experiment.

“For example, in the 7th case study, three different clients were involved in the project. Consequently, our research team had to coordinate with all the clients to ensure that the understanding and expectations of the project were the same for everyone. In the final days leading up to the pre-tests, conference calls lasting from one to two hours with all the stakeholders were organized.” (Project Manager)

4.2.2 Technical: hybrid data collection method

The anticipation of measurements and results is also at the center of the agile/UX process developed by the research team. In parallel with the definition of the mandate and the division into use scenarios, the research team continually tries to foresee the structure of the presentation of the results while remaining flexible. Empirical data, both implicit (lived experience assessed with psychophysiological measures) and explicit (perceived experience assessed with self-reported questionnaires and interviews), are considered. This anticipation is carried out using a systematic methodology of planned codification of the psychophysiological (emotional and cognitive) measures within the clarification of the mandate and the experimental design. The triangulation of measures also makes it possible to anticipate the potentially interesting results that will answer the client's questions. This triangulation is achieved through a mosaic of proven collection methods [5, 30, 31]. The use of several data collection technologies of variable nature (physiological, psychological, and behavioral) ensures an enriched data collection. Consequently, this anticipatory effort allows the UX team to be one development cycle ahead and accelerates the whole process of analyzing the collected data. A comparative empirical data methodology is also deployed: by comparing different conditions of use, design elements, or even groups of users, decision-making becomes more objective, concrete, and easy for the team of designers.

“For example, in the 7th case study, the project involved the collection of implicit data from eye tracking (Tobii), recognition of facial expressions (Facereader, Noldus), electrodermal activity (Biopac) as well as electrocardiogram (Biopac) and explicit data from usability scale questionnaires, performance indicators and interviews. This arsenal of tools was deployed to understand the “what and when” of interaction by triangulating valence (positive or negative), activation (weak or strong) and cognitive (easy or difficult) reactions, as well as the why of interaction through the verbalization of perceived experiences. The hybridization of all the data on the cognitive and emotional load thus created a global portrait of the interactive experience between the users and the product.” (UX Lead)
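The chapter does not publish the team's codification pipeline, but the triangulation logic it describes can be sketched in a few lines. The Python example below rests on our own illustrative assumptions (a 4 Hz electrodermal activity trace, experimenter-logged task windows, and post-task SUS scores); it flags tasks where implicit arousal and explicit satisfaction diverge, which is precisely where the "why" from interviews becomes valuable.

```python
import numpy as np

# Hypothetical 4 Hz electrodermal activity (EDA) trace for one participant (implicit data).
FS = 4  # samples per second
eda = np.random.default_rng(0).normal(loc=0.5, scale=0.1, size=FS * 600)  # 10-minute session

# Task windows logged by the experimenter, in seconds from session start.
tasks = {"task_1": (30, 150), "task_2": (180, 320), "task_3": (350, 560)}

# Explicit data for the same participant: hypothetical post-task SUS scores.
sus_scores = {"task_1": 72.5, "task_2": 55.0, "task_3": 85.0}

for task, (start, end) in tasks.items():
    window = eda[start * FS:end * FS]
    # Simple arousal marker: mean EDA in the task window, z-scored against the whole session.
    arousal = (window.mean() - eda.mean()) / eda.std()
    # Triangulation: a task that is physiologically arousing yet rated easy (or vice versa)
    # is worth revisiting in the interview data.
    flag = "inspect" if arousal > 0.5 and sus_scores[task] > 70 else "ok"
    print(f"{task}: arousal z={arousal:+.2f}, SUS={sus_scores[task]} -> {flag}")
```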

4.3 Based on results

4.3.1 Data visualization

Finally, the lab team has developed a unique and innovative way of presenting its results to facilitate the transmission of knowledge to clients and development teams. By aggregating and triangulating the arsenal of empirical data collected, the laboratory’s researchers have succeeded in creating a methodology for simplifying and making the data more accessible. The results of this methodology are the visualizations of the interactions through the creation of UX heatmaps [5, 30, 31]. These heatmaps offer an “easy to interpret UX evaluation tool which contextualizes users’ signals while interacting with a system. Using these signals to infer the users’ emotional and cognitive states and mapping these states on the interface provide researchers and practitioners with a useful tool to contextualize users’ reactions” [10].

“For example, in the 8th case study, the presentation of the final report including the results of the UX research was carried out with the client’s design team and several decision-makers. Using empirical and perceptual data visualization tools, managers from different departments who do not encounter this type of research on a daily basis quickly realized which of the products studied best met the usability objectives, thus having clear facts with which to make their decision.” (UX Lead)
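The published UX heatmap method [5, 10] involves more elaborate signal processing and state inference; the sketch below, with entirely hypothetical data and a function name of our own, illustrates only the core mapping step: binning a physiological signal attached to gaze fixations onto regions of the interface.

```python
import numpy as np

def ux_heatmap(gaze_xy, signal, screen=(1920, 1080), cell=120):
    """Aggregate a fixation-attached signal onto a grid of screen regions.

    gaze_xy: (n, 2) array of fixation coordinates in pixels.
    signal:  (n,) array of the signal value (e.g., an arousal z-score) at each fixation.
    Returns a grid whose cells hold the mean signal of the fixations that fell in them.
    """
    nx, ny = screen[0] // cell, screen[1] // cell
    total = np.zeros((ny, nx))
    count = np.zeros((ny, nx))
    for (x, y), s in zip(gaze_xy, signal):
        i, j = min(int(y) // cell, ny - 1), min(int(x) // cell, nx - 1)
        total[i, j] += s
        count[i, j] += 1
    # Cells never fixated stay NaN so they render as "no data" rather than neutral.
    return np.where(count > 0, total / np.maximum(count, 1), np.nan)

# Hypothetical data: 500 fixations, each carrying an arousal value.
rng = np.random.default_rng(1)
grid = ux_heatmap(rng.uniform((0, 0), (1920, 1080), size=(500, 2)),
                  rng.normal(size=500))
print(grid.shape)  # (9, 16): one cell per 120x120 px region, ready to overlay on the interface
```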

4.4 Based on time constraints

4.4.1 Granularity

The granularity of the project follows the definition of the mandate. Generally, the research team uses a list of questions from the clients as a baseline and translates them into defined actions. In other words, the UX team restructures the project by dividing it into different evaluation conditions. These conditions typically result in distinct usage scenarios that are not necessarily related to product functionality. Dividing the project into comparable conditions allows the UX team to more easily define the evaluation markers as well as the performance indicators, in order to facilitate the assessment of the overall and specific user experience.

“For example, in the 2nd case study, the customer wished to evaluate the efficiency and efficacy of three functionalities of its new product in development. After numerous exchanges, our research team translated this mandate into an operational experimental design that included testing both the old and new products under two comparable evaluation conditions. The first consisted of testing the 3 functionalities on the old product with existing users in order to establish a comparison baseline. Then, drawing on learning theory, the three functionalities were tested randomly three times on the new product. The third repetition was the one compared between products.” (UX Lead)
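A minimal sketch of this decomposition (the structure and names below are ours, inferred from the quoted description) shows how such a mandate can be expressed as conditions, tasks, and repetitions, from which the codification markers follow mechanically.

```python
# Hypothetical decomposition of the 2nd case study's mandate into evaluation conditions.
mandate = "Evaluate the efficiency and efficacy of three functionalities of the new product"

experimental_design = {
    "baseline": {            # condition 1: old product, existing users
        "product": "old",
        "tasks": ["functionality_A", "functionality_B", "functionality_C"],
        "repetitions": 1,
    },
    "learning": {            # condition 2: new product, three randomized repetitions
        "product": "new",
        "tasks": ["functionality_A", "functionality_B", "functionality_C"],
        "repetitions": 3,    # only the 3rd repetition is compared to the baseline
    },
}

# Markers delimit each task in the physiological recordings, so performance
# indicators can be computed per task window rather than per session.
markers = [f"{cond}/{task}/rep{r}"
           for cond, spec in experimental_design.items()
           for task in spec["tasks"]
           for r in range(1, spec["repetitions"] + 1)]
print(len(markers))  # 12 task windows to code
```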

4.4.2 Pre-test

As each project has its own specificities and distinguishes itself from others, pre-tests are always necessary. Undertaken in a short time span, these pre-tests allow the UX team to make final adjustments before starting the data collection with the participants. Three pre-tests are usually performed. The first is a technical test to ensure that all collection and analysis instruments are functional and set up properly to facilitate collection. The second is done with a member of the team to evaluate the time and fluidity of the experimental task. The third test is done with an external participant to ensure the understanding of each step of the experimental task and to avoid any misunderstanding during the data collection.

“For example, in the 6th case study, while performing the pre-tests, our research team realized that one of the tasks could not be done in the sequence that was proposed initially, and this caused a major change in the experimental design and protocol. The pre-tests prevented loss of data from one of the recruited participants.” (Lab Manager)

4.4.3 Standardization of the planning and methodology

With each successive Sprint project, the research team gradually standardized the process and practice to enhance execution in terms of speed, efficiency of human resources, and costs. Indeed, the team put together a concise timetable detailing every step of a Sprint project, in which responsibilities are assigned to the research team and the design client, and deadlines are specified. This timeline presents, on the one hand, the elements of macro-planning in terms of weeks. Depending on the maturity and knowledge of the design client about their context of intervention, as well as the product or service they wish to test, this preparation phase is variable and flexible. Furthermore, in an academic research context, the submission for ethics certification requires many weeks of anticipation to ensure that all approvals have been obtained before starting the user experience testing. However, if the Sprint project is a sequel to a previous one, or if a design client has already carried out a Sprint project and wishes to carry out a second one, this preparatory phase gradually decreases in duration as it increases in efficiency. On the other hand, the elements of micro-planning in terms of days and hours, such as details of the execution, are specified and are the main interest of this standardized timeline (Table 3).

| | All projects (range/total) | Mean and median per project |
|---|---|---|
| Execution: experimental design | 1–4 conditions; 1–15 tasks; 1–4 sub-tasks; 1–4 interviews; 1–3 questionnaires | 2 conditions; 4 tasks; 2 sub-tasks; 1 interview; 2 questionnaires |
| Execution: tools used | 3–4 neurophysiological tools; 2–4 psychometric tools; observation (performance indicators) | 2 neurophysiological tools; 3 psychometric tools; observation (performance indicators) |
| Execution: time | 7–14 days of preparation; 2–3 days of data collection; 2–3 days of analysis | 7 days of preparation; 3 days of data collection; 2 days of analysis |
| Details of the experiment: participants | 144 typical users | 12 participants |
| Details of the experiment: testing time | 30 min to 1 h | 1 h |
| Result: final report | 799 pages | 66 pages |

Table 2.

Statistics for the totality of the Sprint projects and mean per project.

The activities below follow the Sprint timeline from M-1 (one month before data collection) through D-14, D-7, D-5, D-3, D-2, and D-1, then the collection and delivery days D1 to D7, with certain steps scheduled at specific hours (09:00, 10:00, 14:00) on the data collection days.

| Phase | Activity | Responsibility |
|---|---|---|
| Initiation of the project | Initial meeting | UX team/client |
| | Ethics certification submission (according to the context of intervention: academic vs. industrial) | UX team |
| | Contract | Client |
| Clarification of the mandate | Internal planning (room reservation, etc.) | UX team |
| | Compensation for the participants | UX team/client |
| | Client's involvement | Client |
| | Hours of data collection | UX team/client |
| | Definition of roles and responsibilities | UX team/client |
| | Final mandate | Client |
| | Recruitment criteria | UX team/client |
| Fine-tuning of the experimental design | Experimental design | UX team |
| | Delivery of the first-version prototype | Client |
| | Questionnaires | UX team |
| | Recruitment | UX team or client |
| | Validation of the experimental design | UX team/client |
| | Number of markers | UX team |
| | Planning of the markers | UX team |
| Pre-tests and validation | Final prototype delivery | Client |
| | Prototype validation | UX team |
| | Technical pre-test | UX team |
| | Participants' list | Client |
| | Compensation | UX team or client |
| | Last-minute adjustments on the prototype | Client |
| | Internal pre-test | UX team |
| | Internal validation of the experimental design | UX team |
| | External pre-test | UX team |
| | External validation of the experimental design | UX team |
| | Protocol adjustment | UX team |
| Data collection | Day 1 of data collection | UX team |
| | Day 2 of data collection | UX team |
| Codification | Extraction | UX team |
| | Codification | UX team |
| Analysis | Analysis | UX team |
| | Report preparation | UX team |
| Presentation | Report presentation | UX team |

Table 3.

Macro and micro planning standardization.

This normalized timeline presents the critical path of a Sprint project: (1) project kick-off; (2) mandate definition; (3) experimental design fine-tuning; (4) pre-tests and validation; (5) data collection; (6) codification; (7) analysis; and (8) final presentation. Aiming for complete transparency, this normalized timeline is intended to help all project stakeholders understand the critical steps that could delay the project and to identify the persons in charge of the various steps, so as to avoid misunderstandings and duplication of effort. Moreover, it can serve as a list of actions to be considered when starting a UX research project.
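For teams adopting such a timeline, the milestone dates can be derived mechanically from the first data collection day. The sketch below is our reading of Table 3, not the team's own tooling, and the mapping of milestones to specific D-labels is partly assumed.

```python
from datetime import date, timedelta

def sprint_schedule(first_collection_day: date) -> dict[str, date]:
    """Derive milestone dates from the first data collection day (D1).

    Offsets follow the normalized timeline of Table 3; the D-labels are the
    team's convention, mapped here onto calendar dates.
    """
    d = first_collection_day
    return {
        "initial meeting (M-1)":                 d - timedelta(days=30),
        "final mandate (D-14)":                  d - timedelta(days=14),
        "experimental design validated (D-7)":   d - timedelta(days=7),
        "final prototype delivery (D-5)":        d - timedelta(days=5),
        "technical pre-test (D-3)":              d - timedelta(days=3),
        "internal/external pre-tests (D-2/D-1)": d - timedelta(days=2),
        "data collection (D1-D2)":               d,
        "codification and analysis (D3-D5)":     d + timedelta(days=2),
        "report presentation (D7)":              d + timedelta(days=6),
    }

for step, when in sprint_schedule(date(2019, 10, 14)).items():
    print(f"{when:%Y-%m-%d}  {step}")
```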

4.4.4 Time allotted

Another important aspect to consider during Sprint projects is the time allotted for carrying out the tests. This aspect constitutes one of the limitations of agile/UX research. Indeed, for a Sprint project to be realized in 1 week, the experience of using the evaluated product or service can hardly exceed 1 h without direct consequences on feasibility and costs. The time allotted for data collection must include: (a) greeting the participant; (b) signing the consent letter; (c) assembling and calibrating the data collection apparatus; (d) performing the experimental tasks; (e) questionnaires and interviews; (f) removing the equipment; and (g) handing over the compensation. Consequently, the completion of the three series of pre-tests takes on added importance, as it allows the UX team to ensure that the allotted time is not exceeded.
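A simple arithmetic check makes the constraint visible. The durations below are hypothetical estimates, not figures from the chapter; the point is that once the fixed overhead of items (a) to (g) is budgeted, the product-use portion of a 1-hour slot shrinks considerably.

```python
# Back-of-the-envelope session budget (all durations are hypothetical estimates).
session_minutes = {
    "greeting": 5,
    "consent letter": 5,
    "sensor setup and calibration": 10,
    "experimental tasks": 25,
    "questionnaires and interview": 10,
    "equipment removal": 3,
    "compensation": 2,
}
total = sum(session_minutes.values())
assert total <= 60, f"session over budget by {total - 60} min"
print(f"total: {total} min, of which only {session_minutes['experimental tasks']} on the product")
```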

4.4.5 Scheduling

Finally, the third major challenge is to meet the strict timetable laid out by every project client. Delays in defining the protocol or delivering the ready-to-test prototype have a direct impact on the implementation of the Sprint project. Depending on the availability of experimental rooms, delays may postpone the project for several days, weeks, and even months. They can also generate significant costs for each of the project clients.


5. Discussion

In a Sprint project, the deployment of these strategies poses many challenges for the UX team, which is responsible for clarifying the nature of the questions asked and the time allotted for carrying out the tests to ensure compliance with the timetable. These challenges inevitably compromise the balance of the iron triangle (scope, time, and cost) [32], defined in project management as the key to quality.

As mentioned before, each of the Sprint projects ended with a debriefing between our research team and the client(s) involved in the project. These debriefings shed light on the mistakes made and on improvements for the subsequent projects, on which iterations were made. Among these improvements, it is possible to cite: (1) the level of leadership and management of the project, which directly points towards (2) improved communication in the preparatory phase; (3) the setting up of a statement of work; and (4) the training of research assistants.

5.1 Leadership, management, and communication

In a context where an academic research lab conducts applied research with industrial clients, the latter are not always educated about the possibilities and limitations of the UX testing that can be carried out. It is, therefore, important for the research team to educate and guide clients when it comes to defining the research question and the experimental design. First-time projects with new clients inevitably require additional time and effort in the preparatory phase. Bearing this in mind, the research team must, therefore, develop an educational strategy to facilitate this phase.

5.2 Statement of work

With the same objective of facilitating communication and building a common project, the establishment of a statement of work (SOW) can be considered. It may not always be possible to fulfill every desire of the client in a laboratory setting. It is, therefore, important to set ground rules that clearly distinguish the aspects of the project that are flexible from those that are less adaptable, in order to limit subsequent frustrations on both sides.

5.3 Training of research assistants

Finally, it is also important to consider the academic setting in which the lab team operates. Research assistants are students who must be continually trained and mentored, which adds an extra level of preparation. UX tests require, a priori, rigorous preparation. Since research assistants are heavily involved in collecting and codifying the data, they must be able to understand everything that these UX tests imply.

Hence, the examination of the different case studies of Sprint projects has allowed us to highlight several strategies and lessons learned in the hybridization of an agile/UX approach. Far from offering a definitive methodological proposal that meets all the requirements of an efficient approach, however, the co-authors especially wanted to shed light on interesting lines of thought. The objective of such research is to find an approach that maintains and makes the iron triangle of project management more sustainable, i.e., reducing time and operational costs while maintaining the quality and scope required by the client. Research is, therefore, continuing in this direction, not only to improve the operationalization of such methods but also to make progress in systematizing the codification, analysis, and visualization of enriched UX measurements.

Since this is a new way of thinking about Agile and UX methodologies, it is difficult to find different organizations that implement it, which explains why all the case studies were conducted within the same laboratory. Even though this helps to test and improve the methodology, it is also a limitation. Since there are clear benefits for industrial practice in this area of research, studying other organizations that implement a similar or different methodology merging Agile and UX approaches would be beneficial for in-depth future research.

On the one hand, the classic UX research model, which generally prioritizes the perceptual facet of UX (focus groups, interviews, questionnaires, and observation), is enriched by psychophysiological UX measures. On the other hand, given the costs involved in purchasing the devices, their deployment, and the training required for the application of psychophysiologically enriched UX measures, the co-authors see a great opportunity for complementarity and transfer of knowledge between industry and academia, principally because it allows industry to take advantage of the academic environment and research context in order to explore the best avenues for an agile/UX development approach.


6. Conclusion

In conclusion, to the initial question as to whether enriched UX measures can be performed quickly enough to be included in an agile development cycle, the answer is, therefore, yes. To create a UX project that follows agile development guidelines, the key to success is to steer the approach through numerous small incremental phases oriented toward the users. The working sets, as defined in agile development oriented around functionalities, should, as far as possible, be aligned with the users' needs and be tested by them early in the process, and later throughout all the development phases, to ensure that the final product is integrated harmoniously into the user's life and is adapted to his or her needs.

Furthermore, as the case studies demonstrate, significant findings can be made by testing concepts and prototypes on participants. Therefore, it is important not to wait for a finished and polished product before involving the user and gathering his or her perspective. With as few as a dozen participants and a timeline of 2 weeks, it is possible to obtain quick insights that redirect the project and better align it with real user needs. The tests can target specific features as well as a complete product, aiming to quickly eliminate erroneous assumptions. Depending on what is tested, different tools are available and various approaches can be deployed. It is here that the UX designer's expertise becomes important in identifying which tools and approaches will help to obtain the desired answers.

However, as easy as it may seem at first glance, the variables to be considered are numerous, as discussed at length in this chapter: the nature of the research, the nature of the elements (human and technical), the nature of the results, and the time constraints. Several answers about how the research team improved its effectiveness can be found in this chapter. Nonetheless, in addition to these variables, it is also important to consider the mentality prevalent in certain conservative industries, such as banking, insurance, and government, which often do not want to test concepts or ideas, out of fear of industrial espionage or reputation issues, and instead wait until they have developed a full and finished product. Within these specific industries, an educational phase will be the first step in implementing innovative approaches that aim towards agility and speed in enriched UX evaluation projects.

References

  1. Cohen D, Lindvall M, Costa P. Agile software development. DACS SOAR Report 11. Fraunhofer Center Maryland, USA; 2003
  2. Cooper A, Reimann R, Cronin D. About Face 3: The Essentials of Interaction Design. New York City, United States: John Wiley & Sons; 2007
  3. Goodwin K. Designing for the Digital Age: How to Create Human-Centered Products and Services. John Wiley & Sons; 2011
  4. Hartson R, Pyla PS. The UX Book: Process and Guidelines for Ensuring a Quality User Experience. Elsevier; 2012
  5. Courtemanche F, Léger P-M, Dufresne A, Fredette M, Labonté-LeMoyne É, Sénécal S. Physiological heatmaps: A tool for visualizing users’ emotional reactions. Multimedia Tools and Applications. 2018;77(9):11547-11574
  6. Georges V, Courtemanche F, Sénécal S, Léger P-M, Nacke L, Pourchon R. The adoption of physiological measures as an evaluation tool in UX. In: Proceedings of the International Conference on HCI in Business, Government, and Organizations. Vol. 10294. Vancouver, BC, Canada: Springer; 9-14 July 2017
  7. Law ELC, van Schaik P, Roto V. Attitudes towards user experience (UX) measurement. International Journal of Human-Computer Studies. 2014;72(6):526-541
  8. Vermeeren AP, Law ELC, Roto V, Obrist M, Hoonhout J, Väänänen-Vainio-Mattila K. User experience evaluation methods: Current state and development needs. In: Proceedings of the 6th Nordic Conference on Human-Computer Interaction: Extending Boundaries. ACM; 2010. pp. 521-530
  9. Sohaib O, Khan K. Integrating usability engineering and agile software development: A literature review. In: 2010 International Conference on Computer Design and Applications (ICCDA). Qinhuangdao, China; 25-27 June 2010. p. 38
  10. Georges V, Courtemanche F, Senecal S, Baccino T, Fredette M, Leger P-M. UX heatmaps: Mapping user experience on visual interfaces. In: Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems. California, USA; 2016
  11. Georges V, Courtemanche F, Sénécal S, Baccino T, Léger P-M, Frédette M. Measuring visual complexity using neurophysiological data. In: Information Systems and Neuroscience. Springer; 2015. pp. 207-212
  12. Léger P-M, Sénécal S, Auger C. Le défi de l'expérience utilisateur. Gestion. 2015;40(2):50-57
  13. De Guinea AO, Titah R, Léger P-M. Measure for measure: A two study multi-trait multi-method investigation of construct validity in IS research. Computers in Human Behavior. 2013;29(3):833-844
  14. De Guinea AO, Titah R, Léger P-M. Explicit and implicit antecedents of users' behavioral beliefs in information systems: A neuropsychological investigation. Journal of Management Information Systems. 2014;30(4):179-210
  15. Tams S, Hill K, de Guinea AO, Thatcher J, Grover V. NeuroIS-alternative or complement to existing methods? Illustrating the holistic effects of neuroscience and self-reported data in the context of technostress research. Journal of the Association for Information Systems. 2014;15(10):723
  16. Da Silva TS, Martin A, Maurer F, Silveira M. User-centered design and agile methods: A systematic review. In: Agile Conference (AGILE). Salt Lake City, USA; 2011
  17. Robert J-M. Vers la plénitude de l'expérience utilisateur. In: Proceedings of the 20th Conference on l'Interaction Homme-Machine. New York, USA; 2008. p. 4
  18. Jurca G, Hellmann TD, Maurer F. Integrating agile and user-centered design: A systematic mapping and review of evaluation and validation studies of agile-UX. In: Agile Conference (AGILE). 2014. p. 25
  19. Sy D. Adapting usability investigations for agile user-centered design. Journal of Usability Studies. 2007;2(3):113
  20. Ferreira J, Sharp H, Robinson H. User experience design and agile development: Managing cooperation through articulation work. Software: Practice and Experience. 2011;41(9):963-974
  21. Krawczyk P, Topolewski M, Pallot M. Towards a reliable and valid mixed methods instrument in user eXperience studies. In: 2017 International Conference on Engineering, Technology and Innovation (ICE/ITMC). Madeira Island, Portugal: IEEE; 2017. pp. 1455-1464
  22. Fox D, Sillito J, Maurer F. Agile methods and user-centered design: How these two methodologies are being successfully integrated in industry. In: Agile 2008 Conference (AGILE'08). Toronto, Canada; 2008. pp. 63-72
  23. Brown J, Lindgaard G, Biddle R. Stories, sketches, and lists: Developers and interaction designers interacting through artefacts. In: Agile 2008 Conference (AGILE'08). Toronto, Canada; 2008. p. 39
  24. Meingast M, Ballew T, Edwards R, Nordquist E, Sader C, Smith D. Agile and UX: The road to integration. The challenges of the UX practitioner in an agile environment. In: Proceedings of the Human Factors and Ergonomics Society Annual Meeting. San Diego, California, USA; 30 September-4 October 2013. p. 1002
  25. Kollmann J, Sharp H, Blandford A. The importance of identity and vision to user experience designers on agile projects. In: Agile Conference 2009 (AGILE'09). Chicago, USA; 2009. p. 12
  26. Léger P-M, Courtemanche F, Fredette M, Sénécal S. A cloud-based lab management and analytics software for triangulated human-centered research. In: Proceedings of the NeuroIS Retreat 2018; 2018
  27. Baxter P, Jack S. Qualitative case study methodology: Study design and implementation for novice researchers. The Qualitative Report. 2008;13(4):544-559
  28. Tellis WM. Application of a case study methodology. The Qualitative Report. 1997;3(3):1-19
  29. Yin RK. Case Study Research: Design and Methods. Sage Publications; 2013
  30. Léger P-M, Davis FD, Cronan TP, Perret J. Neurophysiological correlates of cognitive absorption in an enactive training context. Computers in Human Behavior. 2014;34:273-283
  31. Léger P-M, Sénécal S, Courtemanche F, de Guinea AO, Titah R, Fredette M, et al. Precision is in the eye of the beholder: Application of eye fixation-related potentials to information systems research. Journal of the Association for Information Systems. 2014;15(10):651
  32. Atkinson R. Project management: Cost, time and quality, two best guesses and a phenomenon, it's time to accept other success criteria. International Journal of Project Management. 1999;17(6):337-342
