Open access peer-reviewed chapter

The Internet Implementation of the Hierarchical Aggregate Assessment Process with the “Cluster” Wi-Fi E-Learning and E-Assessment Application — A Particular Case of Teamwork Assessment

Written By

Martin Lesage, Gilles Raîche, Martin Riopel, Frédérick Fortin and Dalila Sebkhi

Submitted: 26 November 2014 Reviewed: 19 May 2015 Published: 21 October 2015

DOI: 10.5772/60850

From the Edited Volume

E-Learning - Instructional Design, Organizational Strategy and Management

Edited by Boyka Gradinarova


Abstract

A Wi-Fi e-learning and e-assessment Internet application named “Cluster” was developed in the context of a research project concerning the implementation of a teamwork assessment mobile application able to assess teams with several levels of hierarchy. Usually, teamwork assessment software and Internet applications for teams with several hierarchy levels belong to the field of Management Information Systems (MIS). However, some assessment tasks in teams with several levels of hierarchy and supervision may be performed in an educational context, and the existing applications for the assessment and evaluation of such teams are not dedicated to the assessment of students in an educational context. The “Cluster” application is able to present course material, to train students in teams and to present individual and team assessment tasks. The application’s special functionality enables it to assess teams at several levels of hierarchy, which constitutes the hierarchical aggregate assessment process. In effect, the members of the teams may hold appointments as team member, team leader or team administrator supervising team leaders. The application can therefore simultaneously evaluate different knowledge and skills in the same assessment task, based on the hierarchical position of the team member. The summative evaluation in the application consists of work to submit as well as objective examinations in HTML format, while the formative evaluation is composed of self-assessment and peer assessment grid computer forms. The application offers two mutually exclusive modes, the assessor mode and the student mode. The assessor mode allows the teacher to create courses, manage students, form teams and assess students and teams in a summative manner. The student mode allows students to follow courses, write exams, submit homework, perform in teams and submit formative self- and peer assessments. The theoretical contribution of the project establishes the link between hierarchical aggregate assessment applications and management information systems (MIS). The application is an electronic portfolio (e-portfolio) management system under the competency-based learning approach and an Internet test administration system under the mastery learning approach. The aim of the chapter is to introduce the reader to the field of hierarchical aggregate assessment and to show how to implement complex assessment tasks with several levels of hierarchy in an Internet software application.

Keywords

  • E-learning
  • E-assessment
  • Teamwork assessment
  • Hierarchical aggregate assessment

1. Introduction

1.1. General

The current research project is in the assessment field of education. The members of the project have developed an Internet Wi-Fi application that can assess teams with several levels of hierarchy. This application can be considered an assessment management system (AMS): a display engine for complex assessment tasks in collaborative mode. In fact, during an assessment task, participants can be appointed as team members, team leaders or team administrators who supervise team leaders. These appointments define the hierarchical levels used in the software application. The application is able to process and manage courses, course material, students, teams, hierarchical appointments, assessment tasks, students’ curricula, students’ progression in courses and also summative and formative assessments. The application stores all the assessment data to accelerate the organization’s assessment process at all hierarchy levels. Hierarchical aggregate assessment of learning in the education domain is a subfield of teamwork assessment in which teams have several levels of hierarchy and supervision. Team members are either students or members of any organization who participate in teams in a collaborative mode complex assessment task. In the mastery learning paradigm, this application is a system that presents exams [1] or tests to be solved in teams [2], that is, a test management system. In the competency-based approach paradigm, the application is a collaborative mode complex assessment task display engine [3] and an electronic portfolio (e-portfolio) management system [4], because it stores all the summative and formative assessments of the presented tests and tasks in its database.

Hierarchical aggregate assessment is a teamwork assessment process that groups students in teams with several levels of hierarchy and assigns each of them a hierarchical position as team member, team leader or team administrator, in order to present them complex assessment tasks in a collaborative mode in an authentic context. When the assessment task is completed, the teams are dissolved and the team members are grouped into new teams with new hierarchical positions to perform another assessment task. One of the goals of this chapter is to have the term “hierarchical aggregate assessment” accepted by the scientific community. This process is shown in Figure 1.

Figure 1.

Hierarchical aggregate assessment process

The hierarchical assessment process applies wherever teams have several levels of hierarchy. It can be executed either manually or automatically, with computerized algorithms running on computers or on Internet servers driving Wi-Fi applications. The process finds its origins in the management field, where it has been applied ever since humans worked in teams in large organizations. It has surely been executed by Julius Caesar’s generals to assess the combat effectiveness of soldiers and the leadership of their officers in combat.

Hierarchical aggregate assessment includes standard or conventional assessment, which provides the same type of assessment for all the students in a class. The standard or conventional assessment process assesses the same abilities, performances, knowledge and skills in the same assessment task; it is therefore a particular case of hierarchical aggregate assessment. Hierarchical aggregate assessment, in turn, is the assessment of different abilities, performances, knowledge and skills in the same assessment task according to the hierarchical position assigned to the team member, as shown in Figure 2.

Figure 2.

Situation of hierarchical aggregate assessment in the assessment field

1.2. Objectives of the current research

The objective of the current research, which is also the subject of a doctoral dissertation, is the automation and computerization of the hierarchical aggregate assessment process with Internet applications and mobile technologies (Wi-Fi). With computer algorithms, Internet applications and mobile technologies, teamwork can be done over the Internet with collaborative work applications used by team members. An Internet application named “Cluster” was developed by researchers of the CDAME [5] laboratory for a PhD project to automate and computerize the hierarchical assessment process, following the research and development (R & D) methodology for educational products stated by Harvey and Loiselle [6]. The application currently resides at the following address: http://eval.uqam.ca/cluster/.

1.3. Fields and application domains

The process of hierarchical aggregate assessment has been performed by mankind everywhere and throughout the ages. Yet no scientist has considered defining it as a particular case of teamwork assessment in which team members have several levels of hierarchy. The domain of hierarchical aggregate assessment first situates itself in the field of management, and its computerization in the field of computer science. However, the current research also aims to situate this process in the field of education, through complex assessment tasks in collaborative mode with an authentic context, as shown in Figure 3.

Figure 3.

The field of hierarchical aggregate assessment

In the field of education, courses or complex assessment tasks performed in teams can indeed have several hierarchical levels. An authentic context for hierarchical aggregate assessment occurs when students perform the task in an environment similar to the workplace. This context also applies to the use of mobile technologies (Wi-Fi) in the workplace, through which students can perform a complex assessment task in collaborative mode on their cell phone, iPad, iPod or laptop. The use of information technologies in the hierarchical aggregate assessment process places it in the field of mobile learning and, especially, of mobile assessment.

The “Cluster” Internet application is a presentation engine for complex assessment tasks in collaborative mode with an authentic context that implements the hierarchical aggregate assessment process. One of the goals of this chapter is to formally define the domain of hierarchical aggregate assessment so that it is accepted and recognized by the scientific community.

1.4. Chapter structure and organization

This chapter first defines the problem statement and the theoretical framework of the hierarchical aggregate assessment field. It then describes the computerized implementation of the hierarchical aggregate assessment process in the field of education. This process is implemented with the research and development (R & D) methodology for educational products defined by Harvey and Loiselle [6], using the “Cluster” Internet application. The chapter finally presents and discusses the results of the testing of the “Cluster” application by high school students of the Montreal school board studying geology and by army cadets learning cartography through navigation patrols in teams.


2. Problem statement

2.1. General

None of the teamwork assessment authors in the education domain, such as Sugrue, Seger, Kerridge, Sloane and Deane [7], Volkov and Volkov [8] and Baker and Salas [9], has specifically studied teamwork assessment where teams have several levels of hierarchy. Usually, the assessment of organizations with several levels of hierarchy and supervision is part of the Management Information Systems (MIS) field. However, some teamwork assessment tasks in the field of education can have several levels of hierarchy, so it is important to explore this domain and add new research and theories to the education and assessment fields. This new field of research could produce interesting Internet software applications such as assessment management systems (AMS) in competency-based learning and test assessment systems (TAS) in mastery learning.

Until now, no scientist and no domain expert in the fields of management, information technology, education and assessment has studied and defined hierarchical aggregate assessment. No scientist has yet found a name for an assessment process with several levels of hierarchy that has always existed and has always been applied everywhere. This process takes place whenever individuals are grouped in teams with several hierarchy levels in order to accomplish a task. The research described in this chapter provides a name for this complex process, “hierarchical aggregate assessment”, a definition that will eventually be recognized by the scientific community.

2.2. Teamwork assessment

The problem at the base of the automation of the assessment process for teams with several hierarchy levels resides in the development of a procedure or a computer application. According to Loiselle [10], the research and development (R & D) methodology for educational products is at the origin both of the educational products themselves and of the theories induced by researchers throughout the product development cycles. In the case of the current research, an Internet application implementing the hierarchical aggregate assessment process was developed by researchers of the CDAME laboratory according to this methodology. The hierarchical aggregate assessment process is the theory induced by the research and development process during the implementation of an Internet application able to assess teams with several levels of hierarchy, as shown in Figure 4.

Figure 4.

Hierarchical aggregate assessment in the field of education

The hierarchical aggregate assessment field is a subfield of teamwork assessment. Teamwork assessment is part of both the management and education domains, so hierarchical aggregate assessment is a field common to education and business administration. This states the origin of the research problem: the assessment of teams with several levels of hierarchy has mostly been studied by management and information systems researchers, while very little work has been done on multilevel teamwork assessment in the education field, even though complex assessment tasks with several levels of hierarchy could be performed in a professional training classroom, as shown in Figure 5.

Figure 5.

Origin of the hierarchical aggregate assessment research problem

A large amount of work and research has been done in the assessment field regarding teamwork assessment. Throughout the research and the produced literature, authors such as Sugrue, Seger, Kerridge, Sloane and Deane [7]; Volkov and Volkov [8]; Baker and Salas [9]; Zaccaro, Mumford, Connelly, Marks and Gilbert [11]; MacMillan, Paley, Entin and Entin [12]; Furnham, Pendelton and Steele [13]; Freeman and McKenzie [14, 15]; Ritchie and Cameron [16]; and Lurie, Schultz and Lamanna [17] performed research and developed theories and assessment grids regarding the dynamics of teamwork with a single level of hierarchy, that is, a single leader who leads one or more team members. So far, very few authors, scientists and researchers in the field of team assessment have produced research or theories regarding the assessment of teams with several hierarchy levels (Lesage, Raîche, Riopel & Sebkhi [18]; Lesage, Raîche, Riopel, Fortin & Sebkhi [19, 20]; Sebkhi, Raîche, Riopel & Lesage [21]). This problem funnel is shown in Figure 6.

Figure 6.

Problem funnel of hierarchical aggregate assessment

The process of hierarchical aggregate assessment brings team members together into teams with multiple levels of hierarchy, where these people can occupy the hierarchical positions of president, team manager, team leader or team member, as shown in Figure 7. The structure of the team takes the form of a pyramid or inverted tree representing an organizational chart, in which each branch is a team, that is, an aggregate of team members. The hierarchical aggregate assessment process consists in grouping team members into a hierarchical organizational structure on several levels and then applying an assessment process to each member of the team, each member being a leaf or a node of the organizational structure. The “Cluster” Internet application implements this data structure in its MySQL [22] database, and its collaborative mode complex assessment task presentation engine can perform assessment procedures for each team member, that is, each node of the tree. In one assessment task, the application can therefore assess different objectives, skills, abilities and knowledge. This feature has not been fully implemented in other distance learning applications such as Moodle [23], Blackboard [24] and WebCT [24], and this gap defines the foundation of the research problem. A database sketch of this tree storage is given after Figure 7.

Figure 7.

Hierarchical aggregate assessment process capabilities for simultaneous assessment of multiple skills
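The chapter does not reproduce the actual “Cluster” schema, so the following is a minimal sketch, with hypothetical table and column names, of how such an organizational tree can be stored in MySQL as an adjacency list: each row points to the row of its direct superior, so a team member references a team leader, a team leader references a team administrator, and the root of the organizational chart has no superior.

```php
<?php
// Hedged sketch (hypothetical names, not the actual "Cluster" schema):
// an adjacency-list tree in MySQL. Each row stores a member's hierarchical
// position and a reference to the direct superior (NULL at the root).
$pdo = new PDO('mysql:host=localhost;dbname=cluster_demo', 'user', 'password');

$pdo->exec("
    CREATE TABLE IF NOT EXISTS team_member (
        member_id   INT AUTO_INCREMENT PRIMARY KEY,
        student_id  INT NOT NULL,
        team_id     INT NOT NULL,
        position    ENUM('member', 'leader', 'administrator') NOT NULL,
        superior_id INT NULL,
        FOREIGN KEY (superior_id) REFERENCES team_member(member_id)
    )
");

// Aggregating a small team: one administrator, one leader, two members.
$pdo->exec("INSERT INTO team_member (student_id, team_id, position, superior_id)
            VALUES (101, 1, 'administrator', NULL)");
$adminId = $pdo->lastInsertId();
$pdo->exec("INSERT INTO team_member (student_id, team_id, position, superior_id)
            VALUES (102, 1, 'leader', $adminId)");
$leaderId = $pdo->lastInsertId();
$pdo->exec("INSERT INTO team_member (student_id, team_id, position, superior_id)
            VALUES (103, 1, 'member', $leaderId),
                   (104, 1, 'member', $leaderId)");
```

An adjacency list keeps team formation and dissolution cheap (simple row inserts and deletes); reading a whole subtree back then requires recursive descent, sketched near the end of Section 5.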

2.3. Available Internet teamwork assessment applications

Commercially available online learning (e-learning) and online assessment (e-assessment) software includes Moodle [23], Blackboard [24] and WebCT [24]. These applications implement collaborative learning via the Internet through virtual classrooms in which a student may be a member of one or more working groups and may attend one or more classes. They offer basic assessment features, such as homework submission in electronic format by uploading files to be handed to the teacher, and database repositories of multiple-choice questions used in self-correcting HTML tests. However, none of these applications has the data structure and software architecture to group or aggregate individuals or teams of students over several hierarchical levels in order to achieve complex assessment tasks in collaborative mode, as the “Cluster” application does.

Few of the applications mentioned in the literature are capable of simultaneously assessing different skills and knowledge according to the hierarchical status of the learner in the same assessment task. The following authors studied peer assessment, but only for the assessment of the same skills and knowledge among team members having the same hierarchical status: Sugrue, Seger, Kerridge, Sloane and Deane [7]; Volkov and Volkov [8]; Baker and Salas [9]; Zaccaro, Mumford, Connelly, Marks and Gilbert [11]; MacMillan, Paley, Entin and Entin [12]; Furnham, Pendelton and Steele [13]; Freeman and McKenzie [14, 15]; Ritchie and Cameron [16]; and Lurie, Schultz and Lamanna [17]. The “Cluster” application data structure is designed to record the group organizational tree structure that links team members together across hierarchical levels, while the Moodle [23], Blackboard [24] and WebCT [24] Internet applications only record virtual classes without several levels of hierarchy.

Advertisement

3. Theoretical framework

3.1. General

The theories and research produced by the current project extend previous work by Nance [25], who uses an aggregation process similar to that of the “Cluster” application to form teams with several levels of hierarchy for educational purposes, namely to manage project teams in software engineering courses, and the work of Freeman and McKenzie [14, 15] on the “SPARK” software application, an Internet distance assessment system managing self-assessment and peer assessment through assessment grids. Peer assessment sits at level 5 of Krathwohl’s affective domain taxonomy (Legendre [1]; Lavallée [26]; Krathwohl, Bloom and Masia [27]). Competency assessment in the field of hierarchical aggregate assessment can be made with observation grids or competency assessment grids (Hubert & Denis [28]; Jeunesse [29]) and also with portfolios (Allal [4]) that usually contain self-assessments (Endrizzi and Ray [30]).

The current research is based on the development of the “Cluster” Internet application, which implements the hierarchical aggregate assessment process. This application is a presentation engine for collaborative mode complex assessment tasks in an authentic context. The development of the “Cluster” Internet application finds its theoretical foundations in (1) complex assessment tasks (Louis & Bernard [31]; Tardif [32]), (2) authentic context assessment (Palm [33], p. 6; Louis & Bernard [31]; Wiggins [34, 35]; Hart [36]; Allal [4]; Rennert-Ariev [37]), (3) teamwork assessment (Baker & Salas [9]; Marin-Garcia & Lloret [38]), (4) collaborative work assessment (Swan, Shen & Hiltz [39]; Volkov & Volkov [8]; Boud, Cohen & Sampson [40]; MacDonald [41]; Worcester Polytechnic Institute [42]), and (5) assessment grids (Durham, Knight & Locke [43]; Marin-Garcia & Lloret [38]) as well as self-assessment and peer assessment (Lingard [44]; Goldfinch [45]; Goldfinch & Raeside [46]; Northrup & Northrup [47]).

3.2. Definition of hierarchical aggregate assessment process in general terms

Hierarchical aggregate assessment is defined in general terms as a team grouping process and as a subfield of teamwork assessment in which teams have several levels of hierarchy and supervision (Lesage, Raîche, Riopel & Sebkhi [18]; Lesage, Raîche, Riopel, Fortin & Sebkhi [19, 20]; Sebkhi, Raîche, Riopel & Lesage [21]). This assessment process with several levels of hierarchy and supervision in the field of education, one of the main theoretical contributions of this research project, has been named “hierarchical aggregate assessment”. The process includes the formation of teams with several levels of hierarchy, the presentation of exams or complex assessment tasks to the teams and the dismantling of the teams for the next team assessment task, as shown in Figure 1.

3.3. Definition of hierarchical aggregate assessment process in education

In the education field, the hierarchical aggregate assessment process is defined as a team grouping process and a subfield of teamwork assessment in which teams have several levels of hierarchy and supervision, where team leaders, who may be students, are assessed by one or many group managers, who may be other students, teachers or professors (Lesage, Raîche, Riopel & Sebkhi [18]; Lesage, Raîche, Riopel, Fortin & Sebkhi [19, 20]; Sebkhi, Raîche, Riopel & Lesage [21]).

3.4. Situation of the field of hierarchical aggregate assessment process in the mastery learning paradigm

The assessment process in the mastery learning paradigm aims to determine the level at which the educational objectives are mastered or attained (Legendre [1]). Bloom’s [48] taxonomy of educational objectives in the cognitive domain makes it possible to define an educational objective by a statement describing knowledge, a skill or a performance and a description of how this knowledge, skill or performance is applied. The comprehension, application, analysis and synthesis levels of Bloom’s cognitive taxonomy are considered to represent the most important goals of the education field. This finding has provided a foundation for raising the complexity level of tests and teaching programs towards educational objectives in the higher levels of Bloom’s taxonomy (Krathwohl [49]). According to some authors, such as Wiggins [34], traditional tests based on educational objectives rely on out-of-context rote learning or on open questions needing only a few words for answers, such as an exam on multiplication tables. These types of tests or exams verify whether the students meet the criteria stated in the course curriculum.

The hierarchical assessment process is based on teamwork assessment. In the mastery learning paradigm, the assessment process is realized through tests or exams that may contain items [49], questions and tests (De Ketele & Gérard [2]) and also work to accomplish [48]. In this paradigm, an exam or test done in teams requires a concrete work or performance accomplished by a team at the end of a course or study program [1]. Exam questions and learning objectives concerning team work or performances fall within the levels of Bloom’s cognitive taxonomy. In the hierarchical assessment process, the tests and exams are done in teams, so the persons taking part in a team exam can assess quantitatively and qualitatively the work done in teams to determine whether the production or the performance meets the determined criteria; this type of assessment belongs to level 6 of Bloom’s [48] cognitive taxonomy, named “evaluation”. In some exams taken in teams, the participants may also perform self-assessment and peer assessment. The peer assessment process belongs to level 5 of Krathwohl’s [27] affective taxonomy, which concerns the classification of a value or belief system [1, 26, 27].

3.5. Situation of the field of hierarchical aggregate assessment process in the competency-based approach paradigm

In the competency-based paradigm, the exercise of a competency is based on the mobilization of resources to solve a complex situation (Van Kempen [3]). Competencies group the skills, attitudes and knowledge allowing a person to perform tasks (Bastiaens [50]). The competency-based approach replaces classical objective-based tests with assessment tasks or situations that include social interaction (Allal [4]). Assessment tasks are evaluation tools that mobilize resources to solve a problematic situation or to perform a complex task. These tools are used to develop competencies through complex tasks allowing knowledge synthesis (Saskatchewan Professional Development Unit [51]; Olivier [52]; Louis and Bernard [31]; Tardif [32]; Van Kempen [3]; De Ketele & Gérard [2]).

The objectives of learning and assessment situations are to develop disciplinary and transversal competencies and to assess all students, who must prove that they can resolve a problematic situation with their knowledge and skills (Bibeau [53]). The aim of competency assessment is to verify whether the student has used all available resources to accomplish a task successfully. During this process, students should be involved in their own assessment and perform self-assessment (Jeunesse [29]). The formative assessment of competencies is based on interactive regulation arising from student-teacher interaction, interactions with peers and learning tools. Learners can involve themselves in the assessment process through self-assessment, peer assessment and co-evaluation (Allal [4]). In the competency-based approach, the hierarchical assessment process is the implementation of complex assessment tasks in teams. These tasks may include summative assessments, which are performances or tasks to accomplish either individually or in teams, and formative assessments produced by the self-assessment and peer assessment of team members. The competency-based approach in the hierarchical aggregate assessment field can be implemented with observation grids or competency assessment grids, as shown in Figures 22, 23 and 24 (Hubert & Denis [28]; Jeunesse [29]), and also with portfolio assessment (Allal [4]) that usually contains self-assessment (Endrizzi & Ray [30]).

3.6. Previous work and similar existing Internet applications

The current research project finds its origins and theoretical framework in previous research and in other distance assessment Internet applications developed with a research and development (R & D) methodology. These applications are SPARK, developed by Freeman and McKenzie [14, 15] and Willey and Freeman [54, 55]; MLE, developed by Marshall-Mies, Fleischman, Martin, Zaccaro, Baughman and McGee [56]; Mega Code, developed by Kaye and Mancini [57]; and, most similar to the current research project, a collaborative work management Internet application developed by Nance [25].

SPARK [14, 15, 54, 55] is a remote rating system that computes the results of self-assessment and peer assessment grids to determine the final grade of engineering students on projects during practical work. It primarily detects team members who have not done their fair share of the work, delivering a poor performance in their team and letting others do the work for them.
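The SPARK publications describe grid-based moderation of a team mark into individual marks; the sketch below conveys the general idea only, with a hypothetical weighting formula that is not necessarily the one published by Freeman and McKenzie: each student’s share of the team mark is weighted by how the completed grids rated his or her contribution relative to the team average.

```php
<?php
// Hedged sketch of grid-based peer-assessment moderation in the spirit of
// SPARK (hypothetical simplification, not the published formula): weight
// each student's copy of the team mark by his or her relative grid rating.
function moderateTeamMark(float $teamMark, array $ratingTotals): array
{
    $average = array_sum($ratingTotals) / count($ratingTotals);
    $marks = [];
    foreach ($ratingTotals as $student => $total) {
        // The square root damps the weighting so that one harsh or
        // generous grid does not dominate a student's final mark.
        $factor = sqrt($total / $average);
        $marks[$student] = min(100.0, $teamMark * $factor);
    }
    return $marks;
}

// Example: a team mark of 80 moderated by three students' grid totals.
print_r(moderateTeamMark(80.0, ['alice' => 45, 'bob' => 30, 'carol' => 45]));
```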

MLE [56] is an application that predicts and assesses the leadership potential of high-level military managers, such as colonels and generals, through complex assessment tasks consisting of case studies and the resolution of war scenarios.

Mega Code [57] is a software application used in the field of medicine as a cardiac arrest simulator. It is used to assess the performance of resident doctors and nurses when they hold the role of leader of a resuscitation team treating patients who suffered a cardiac arrest. The team is organized around five main roles: (1) the doctor in charge of the team, (2) the controller of respiration, (3) the head of the defibrillator, (4) the head of chest compressions and (5) the head of injections and intravenous infusions. The team leader is assessed using an assessment grid that checks the two main aspects of cardiac resuscitation: the team effort, and the process and directions given to the members of the team by the team leader to resuscitate the patient.

The collaborative work management Web application developed by Nance [25] is used by students of engineering and computer science faculties. It uses a multiple-level aggregation process for the grouping of teams that is similar to the aggregation process implemented in this research and in the “Cluster” Internet application. Nance’s research [25] consists of the implementation of an Internet-based collaborative work application used to manage and assess the projects and productions of engineering and computer science students. The application has the features needed to group students in teams with multiple levels of hierarchy and supervision, including team leaders, project managers (bosses) and project administrators (bosses of bosses (BOB)) supervising several project managers. Nance’s collaborative work implementation is based on electronic mail (e-mail) and a discussion forum website.

3.7. Link between hierarchical aggregate assessment applications and Management Information Systems (MIS)

A management information system (MIS) software application “uses computer equipment and software, databases, manual procedures, models for analysis, planning, control and decision-making” (Davis, Olson, Ajenstat & Peaucelle [58]). These systems may contain information about the function, the department and the hierarchical position of the members of the organization, stored in hierarchical databases (Burch & Grudnitski [59]; Davis & Olson [60]; Davis, Olson, Ajenstat & Peaucelle [58]; Laudon & Laudon [61]; Laudon, Laudon & Brabston [62]). Some authors, such as Kanter [63], indicate that the employee file can be sorted by position or assignment to identify employees who have the same hierarchical position. A database record illustrating an employee’s position is shown in Figure 8. A hierarchical aggregate assessment software application is therefore a management information system in which the employees to manage are students who hold a hierarchical position.

Figure 8.

The record of an employee in a management information system database [62]

In the current paradigm, there is a major difference between distance assessment systems and management information systems. A distance assessment system is a question bank repository stored in a database; it usually presents the same questions or the same assessment tasks to all students to assess the same skills and knowledge, and there is no hierarchical relationship or hierarchy levels between the students. A management information system (MIS) stores and processes management data and information on employees to produce information used for decision-making; the assessment data it computes for employees are usually sales data and production performance. Management information systems are able to record the hierarchical relations and positions of employees, while distance assessment applications cannot.

In the hierarchical aggregate assessment paradigm, there is only a slight difference between hierarchical aggregate assessment applications and management information systems, because both record the hierarchical relations and positions of the persons they manage. The difference is that the management information system processes management data, while the hierarchical aggregate assessment application processes assessment data, course material, question banks and complex assessment tasks with several levels of hierarchy. Hence, any management information system could be modified to record course material and question banks and to present complex assessment tasks with several levels of hierarchy. The modified management information system thereby gains hierarchical aggregate assessment capability and becomes a hierarchical aggregate assessment software application, as shown in Figure 9. A schema sketch of such an extension is given after the figure.

Figure 9.

Link between hierarchical aggregate assessment applications and management information systems (MIS)
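As a hedged illustration of this claim, the sketch below (hypothetical table names) starts from a classical MIS employee table that already records hierarchical positions, as in Figure 8, and adds the course material and question bank tables that give the system hierarchical aggregate assessment capability.

```php
<?php
// Hedged sketch (hypothetical names): extending an MIS schema with
// e-learning tables. The employee table is the classical MIS part.
$pdo = new PDO('mysql:host=localhost;dbname=mis_demo', 'user', 'password');

$pdo->exec("
    CREATE TABLE IF NOT EXISTS employee (
        employee_id INT AUTO_INCREMENT PRIMARY KEY,
        name        VARCHAR(100) NOT NULL,
        position    VARCHAR(50)  NOT NULL,
        superior_id INT NULL,
        FOREIGN KEY (superior_id) REFERENCES employee(employee_id)
    )
");

// Added for assessment capability: course material and a question bank
// whose items can target a hierarchical position, so that one task can
// assess different skills at different levels.
$pdo->exec("
    CREATE TABLE IF NOT EXISTS course_material (
        material_id INT AUTO_INCREMENT PRIMARY KEY,
        title       VARCHAR(200) NOT NULL,
        content     MEDIUMTEXT   NOT NULL
    )
");
$pdo->exec("
    CREATE TABLE IF NOT EXISTS question_bank (
        question_id     INT AUTO_INCREMENT PRIMARY KEY,
        material_id     INT NOT NULL,
        target_position VARCHAR(50) NOT NULL,
        question_text   TEXT NOT NULL,
        FOREIGN KEY (material_id) REFERENCES course_material(material_id)
    )
");
```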


4. Methodology

4.1. Choice of methodology and software application design process

The implementation of teamwork complex assessment tasks with several levels of hierarchy is the concept at the origin of the research and development (R & D) process used to develop an Internet software application, named “Cluster”, as an educational product. The educational research and development model used is the one proposed by Harvey and Loiselle [6]. The research project’s objectives are to develop an Internet multilevel teamwork assessment application in accordance with the Harvey and Loiselle [6] model and to test the application with high school students and Canadian army cadets, who assess its usability with the Questionnaire for User Interaction Satisfaction (QUIS) [64]. The Harvey and Loiselle [6] research and development process used in the present research project yields two results: the first is the “Cluster” Internet application, and the second is the theoretical statement of the hierarchical aggregate assessment process, submitted for acceptance by the scientific community.

The current research project is the development of an educational tool that implements the hierarchical aggregate assessment process. Richey and Nelson [65] state that the development of a software application to be used as an educational tool is part of the research and development (R & D) methodology for educational products. The development of the “Cluster” Internet application and its use by students and teachers place this research in the research and development (R & D) paradigm, with mixed data analysis using qualitative and quantitative methods. The qualitative aspect belongs to the interpretivist epistemological paradigm [66, 67] and is used primarily to determine whether users like the software, to identify resistance-to-change factors and to assess the proper functioning of the software. The quantitative aspect, for its part, belongs to the positivist epistemological paradigm [66, 67] and is used to assess the increase in knowledge as well as the course success and dropout rates of students.

Regarding the choice of a research and development model, several authors have proposed models or developed research approaches, such as Borg and Gall [68], Nonnon [69], Cervera [70], Van der Maren [71] and Harvey and Loiselle [6]. In all cases, these models include the phases of (1) problem analysis, (2) project planning, (3) production or development, (4) testing, (5) evaluation and (6) review [10]. The model chosen is the one developed by Harvey and Loiselle [6], because it is newer than Nonnon’s model [69, 72] and summarizes all stages of the research and development models of the previously cited authors. This model includes five phases: (1) determination of the cause of the research, (2) determination of the theoretical background, (3) determination of the methodology, (4) implementation or development of the educational product and (5) production of the results, as shown in Figure 10.

Figure 10.

The research and development model of Harvey and Loiselle [6]

The research and development methodology is similar to the technical development of durable and consumable products used in engineering. Loiselle [10] defined it as an iterative process involving seven steps: (1) preliminary analysis; (2) prototype design and evaluation; (3) testing; (4) evaluation, revision and correction; (5) publication of results; (6) distribution; and (7) marketing. If the developed product shows shortcomings, failures or defects in the final stages of the development process (evaluation, revision and correction, publication of results, distribution and marketing), the process returns to the analysis phase to find a solution correcting the defects of the product, as shown in Figure 11. The first functional tests, or alpha tests, were conducted by the authors of this chapter to ensure that the “Cluster” Internet application was ready to be used by teachers and students. Once the functional tests were completed, a second series of tests, or beta tests, was performed by Mrs. Dalila Sebkhi’s high school students [18, 19, 20, 21] during her third bachelor of education internship, in which she taught geology to high school students of the Montreal School Board (CSDM). Afterwards, further beta tests were performed by the authors through a distance learning implementation of map reading for Canadian army cadets, with navigation patrols in teams [18, 19, 21].

Figure 11.

General research and development process (R & D) [10]

4.2. The testing of the “Cluster” Internet application with high school students

The “Cluster” Internet application was first tested with high school students during teaching assignments III and IV of Mrs. Dalila Sebkhi [18, 19, 20, 21]. These teaching assignments are part of the Université du Québec à Montréal’s bachelor of education curriculum. The application was used during teaching assignment III as an educational tool supporting the learning of high school students in science and technology classes at the “La Voie” high school of the “Commission Scolaire de Montréal” (CSDM).

The experimental subjects were 113 (N = 113) 9th grade high school students divided into four classes. The course studied was a geology course including sections on the solar system, relief, and rocks and minerals. The course content was converted to electronic format and placed in the database of the “Cluster” Internet application so that students could access the course material at home, outside school hours. This experiment used only qualitative methods and was based on the analysis of the testimonies of students and school officials who used the application. Mrs. Sebkhi also intended to use the “Cluster” Internet application during her teaching assignment IV, which involved 118 11th grade high school students of the “St-Luc” high school, also part of the CSDM, divided into four classes. The course studied was thermodynamics. However, this experiment did not take place due to resistance to change: the directors overseeing Mrs. Sebkhi’s teaching assignment IV felt that too much time would be needed for students to learn to use the “Cluster” Internet application effectively.

4.3. The testing of the “Cluster” Internet application with Canadian army cadets

The “Cluster” Internet application was also tested with the Royal Canadian Army Cadets, with an experimental group of 27 young army cadets (N = 27) and a control group of 12 cadets (N = 12) [18, 19, 20, 21]. All experimentation subjects came from two cadet corps of the province of Quebec, in Canada, and had an average age of 14. The course studied was a military map-reading course entitled “PO 122 – Identify a location using a map”. The theoretical content of the course is found in the book “A-CR-CCP-701/PF-001, Green Star, Instructional guides”, published by the staff of the Royal Canadian Army Cadets [73].

Both groups had to study topography and map reading in order to perform navigation patrols in teams. The experimental group used the “Cluster” Internet application to study map reading, while the control group studied the same content in a classroom with traditional teaching methods, namely Canadian Forces instructional techniques. Subjects in the experimental group were from the cadet corps “2567 Dunkerque” of the city of Laval, while subjects in the control group were part of the cadet corps “2595 St-Jean” of the city of Saint-Jean-sur-Richelieu. The cadet corps “2595 St-Jean” resides in the buildings of the Royal Military College Saint-Jean.

The classes given were part of a topography and map-reading course that included five theoretical lessons: (1) the different types of maps, (2) the marginal information found on a map, (3) map symbols and conventional signs, (4) map contour lines and (5) four-, six- and eight-digit coordinates. The course material was converted to electronic format and placed in the “Cluster” Internet application database. The course was divided into two parts: a theoretical part, in which the subjects studied the course material, and a practical part, in which the subjects patrolled in the training area between two eight-digit coordinates given by the experimenter. Subjects in the control group studied the theoretical part in a classroom with a teacher, who in the military is called an instructor. Subjects in the experimental group, for their part, studied the theoretical part of the course at home using the “Cluster” Internet application. Both groups had to complete the practical part of the course, consisting of navigation patrols in teams in training areas, in order to compare the validity of in-class learning and of distance learning on the Internet.

The experimentation was validated using a mixed methodology grouping quantitative and qualitative tools. The qualitative research methods used were observation, interviews and post-exercise report analysis. These served to determine whether the application was easy to use, whether the training was accurate and whether the test subjects enjoyed using the “Cluster” Internet application. Usability and user interface conviviality factors are crucial to mitigate the effect of resistance to change during the implementation of software used to make the transition from traditional classroom education to e-learning.

Quantitative research methods were used to determine the level of user interface conviviality and the influence of the “Cluster” Internet application on student learning rates. The quantitative instruments used in the experiment were (1) an initial knowledge exam, (2) self-correcting HTML objective exams, (3) work to submit by upload in electronic format, (4) a final knowledge exam, (5) electronic self-assessment forms, (6) electronic peer assessment forms, (7) course module confirmation examinations and (8) the QUIS questionnaire (Questionnaire for User Interaction Satisfaction) [64, 74]. Formative assessment is given by the students of the course using electronic forms of self-assessment and peer assessment, while summative assessment is provided by HTML questionnaires, homework to submit and course module confirmation examinations, and also by the mark given by the teacher or evaluator for the practical part of the course, which consists of navigation patrols in teams using a topographic map. The results of the initial and final knowledge tests are not included in the course final result; they are used only to establish research findings and conclusions regarding the increase of knowledge in both the experimental and control groups. The QUIS questionnaire is used to quantitatively assess the user interface conviviality of the computer application and the satisfaction level of the users, as shown in Figure 12.

Figure 12.

A section of the QUIS questionnaire [64, 74]

The curriculum or course progression for a student is (1) to take the initial knowledge exam; (2) to complete the five course modules, performed in class for the control group and at a distance with the “Cluster” Internet application for the experimental group, each module ending with an HTML objective exam, these exams together accounting for 50 % of the final mark; (3) to participate in at least three navigation patrols, in which the student successively holds the team member, team leader and group administrator assignments; (4) to complete the self-assessment and peer assessment forms after each patrol; (5) to be assessed by the teacher or assessor, who assesses the patrol team and assigns each student a mark for all the work done during patrols, accounting for the other 50 % of the final grade; and (6) to write the final or end-of-course knowledge exam.


5. Results

5.1. General

The “Cluster” application is now fully functional and resides at the address http://eval.uqam.ca/cluster/. The application is relatively easy to use and constitutes a programmable software shell for implementing courses. To create a course in assessor mode, the teacher needs the course material, the course schedule, the definition of the assessment tasks, the students’ names and the teams’ organogram, all of which must be entered in the application’s database. Once the course is started, the teacher can group the students into teams and assess individual and team tasks. To follow a course, the student logs into the application and selects the course he wants to follow. Once in the course, the student can study the course material, write exams, submit homework, participate in assessment tasks and submit self-assessments and peer assessments. The experimentation of the “Cluster” Internet application with high school students and army cadets revealed resistance to change by the users and the need for some software modifications, namely the addition of (1) a field identifying the name of the student group or class in the database, (2) return buttons preventing the students from getting stuck in the interface and course modules and (3) a course progression matrix for each student group or class.

The current doctoral project aims to computerize the assessment of teams on several hierarchical levels using a research and development methodology for educational products. Since the process of research and development in education yields not only educational products but also theories, this research produces the following results: (1) the definition of the hierarchical aggregate assessment process, (2) the “Cluster” Internet application, (3) considerations and changes arising from the experiment with high school students and (4) considerations arising from the experimentation with army cadets.

5.2. Hierarchical aggregate assessment process

The process of grouping students into teams with several hierarchical levels implemented in the “Cluster” Internet application was the object of the theoretical considerations of the research and development process that led to the statement of its definition. The researchers of this doctoral project would like the term “hierarchical aggregate assessment” to be accepted and recognized by the scientific community as a whole, because this process has always existed and has always occurred in large organizations.

5.3. The “Cluster” Internet distance assessment application

The “Cluster” distance assessment (e-assessment) Internet application is a presentation engine for collaborative mode tasks in an authentic context. The application is developed in PHP and supported by a MySQL database. The preliminary analysis and functional analysis phases of the software development process were done by the CDAME software analysts. The application development in the PHP programming language, as well as the modelling and design of the application’s database management system (DBMS) in MySQL [22], were done by Frédérick Fortin [19, 20], information systems analyst and programmer for the “LabMECAS (Laboratoire mobile pour l'étude des cheminements d'apprentissage en sciences (FCI))” [75]. The software architecture of the “Cluster” Internet application is shown in Figure 13.

Figure 13.

“Cluster” Internet application software architecture

The database management system of the “Cluster” Internet application is able to manage (1) student data, (2) course material, (3) team formation, (4) courses, (5) formative and summative assessments and (6) hierarchical relationships between team members over several levels. In the data structure, a course is broken down into modules, and modules include tasks that may or may not carry an assessment. This assessment can be individual or in teams. Individual assessment consists of either HTML objective questionnaire examinations or homework submitted in electronic format with the system’s upload functionality. Assessment tasks in teams include formative assessments, namely self-assessment and peer assessment, and summative assessment, namely the mark given to the team by the assessor for a production, task or performance. The database architecture of the “Cluster” Internet application is shown in Figure 14, followed by a schema sketch of the course decomposition.

Figure 14.

“Cluster” Internet application database architecture
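A minimal sketch of this course decomposition, again with hypothetical table names: the assessment_kind flag records whether a task carries no assessment, an individual assessment (HTML exam or upload) or a team assessment.

```php
<?php
// Hedged sketch (hypothetical names): a course breaks down into modules,
// and modules include tasks that may or may not carry an assessment.
$pdo = new PDO('mysql:host=localhost;dbname=cluster_demo', 'user', 'password');

$pdo->exec("
    CREATE TABLE IF NOT EXISTS course (
        course_id INT AUTO_INCREMENT PRIMARY KEY,
        title     VARCHAR(200) NOT NULL
    )
");
$pdo->exec("
    CREATE TABLE IF NOT EXISTS module (
        module_id     INT AUTO_INCREMENT PRIMARY KEY,
        course_id     INT NOT NULL,
        module_number INT NOT NULL,
        FOREIGN KEY (course_id) REFERENCES course(course_id)
    )
");
$pdo->exec("
    CREATE TABLE IF NOT EXISTS task (
        task_id         INT AUTO_INCREMENT PRIMARY KEY,
        module_id       INT NOT NULL,
        assessment_kind ENUM('none', 'individual', 'team') NOT NULL,
        FOREIGN KEY (module_id) REFERENCES module(module_id)
    )
");
```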

The application has two mutually exclusive operating modes: the student mode and the administrator or assessor mode. The system does not allow an individual with administrator or assessor status to study the course material or to participate in an assessment task as a team member. Conversely, the system does not allow an individual with student status to alter the databases and student records or to execute system administrator commands. In the student mode, a user cannot give summative assessments or assess homework and team tasks. The mode is determined at login, when the system recognizes whether the username belongs to a student, an assessor or an administrator. The home page contains the entry fields for the username and password and is shown in Figure 15, followed by a sketch of the mode dispatch.

Figure 15.

“Cluster” Internet application login page
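The chapter does not show the login logic itself; the sketch below, with hypothetical table and column names, illustrates how such a mutually exclusive mode can be derived from the stored user status at login time and then enforced on every assessor-only page.

```php
<?php
// Hedged sketch (hypothetical names): binding the session to exactly one
// operating mode ('student' or 'assessor') when the user logs in.
session_start();

function login(PDO $pdo, string $username, string $password): ?string
{
    $stmt = $pdo->prepare(
        'SELECT password_hash, status FROM app_user WHERE username = ?'
    );
    $stmt->execute([$username]);
    $user = $stmt->fetch(PDO::FETCH_ASSOC);

    if ($user === false || !password_verify($password, $user['password_hash'])) {
        return null; // unknown user or wrong password
    }
    $_SESSION['mode'] = $user['status']; // 'student' or 'assessor'
    return $_SESSION['mode'];
}

// Every assessor-only page can then guard itself with a single check,
// which is what keeps the two modes mutually exclusive.
function requireAssessor(): void
{
    if (($_SESSION['mode'] ?? null) !== 'assessor') {
        http_response_code(403);
        exit('This function is reserved for the assessor mode.');
    }
}
```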

The student mode is used only by students or candidates taking distance courses given with the “Cluster” Internet application. It allows course candidates to (1) study the course material; (2) check the curriculum record sheet to know which course modules are done and to follow their progression through the course modules; (3) take HTML examinations; (4) submit homework; (5) be part of a team performing a complex evaluation task in teams; (6) occupy a hierarchical position in the team as team member, team leader or group administrator; and (7) fill in self-assessment and peer assessment forms. Once students have begun a session in the application, they can choose the course they want to study, if they are registered in several courses, with the form shown in Figure 16.

Figure 16.

Course selection screen

Once the student has chosen the course he wants to study, the user interface drop-down menu gives access to the modules of the course. The course module selection menu is shown in Figure 17.

Figure 17.

Course module selection menu

The menu allows the student to study the course material sequentially from the first module to the last. An application functionality prevents the student from browsing or navigating randomly through the course modules: the student may only study the course modules in order, from the first to the last, the last module being the end of the course. The application displays the course material on screen for the student to read. When displaying the course material, a pop-up menu allows the student to save or print the displayed material for future revision. The course material is displayed using the computer screen shown in Figure 18; a sketch of the sequential access check follows the figure.

Figure 18.

Course material display screen
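A minimal sketch of how this sequential access can be enforced, assuming a hypothetical module_completion table recording each module a student has finished: the application accepts a request to reopen a completed module or to open the next one, and refuses anything further ahead.

```php
<?php
// Hedged sketch (hypothetical names): forcing sequential study of the
// course modules, from the first to the last.
function canOpenModule(PDO $pdo, int $studentId, int $courseId, int $requested): bool
{
    // Highest module this student has completed in this course (0 if none).
    $stmt = $pdo->prepare(
        'SELECT COALESCE(MAX(module_number), 0)
           FROM module_completion
          WHERE student_id = ? AND course_id = ?'
    );
    $stmt->execute([$studentId, $courseId]);
    $lastCompleted = (int) $stmt->fetchColumn();

    // Completed modules may be reopened for revision; the next module
    // may be started; anything beyond it is refused.
    return $requested <= $lastCompleted + 1;
}
```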

The student can consult the curriculum record sheet, which shows the student’s progress through the course modules and tasks, at any time. The computer screen representing the curriculum record sheet is shown in Figure 19.

Figure 19.

Curriculum record sheet display screen

Figure 20.

HTML objective questionnaire

The “Cluster” Internet application has two assessment modes: individual assessment and assessment in teams (teamwork assessment). Individual assessment is processed through HTML objective exams and homework submission in electronic format by an upload function, while teamwork assessment is done by the teacher or the assessor, who can observe the team or assess a performance or a production with a mark. The HTML objective questionnaire is shown in Figure 20.

Performances, work and productions of the students are submitted using the standard upload computer screen shown in Figure 21; a sketch of the upload handling follows the figure.

Figure 21.

Standard upload computer screen
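File submission can rely on PHP’s standard upload mechanism; the following is a minimal sketch with a hypothetical form field name and storage directory, not the actual “Cluster” code.

```php
<?php
// Hedged sketch (hypothetical names): receiving a homework file posted
// from an HTML form field named 'homework' and storing it for marking.
$uploadDir = __DIR__ . '/uploads/';

if (($_FILES['homework']['error'] ?? UPLOAD_ERR_NO_FILE) === UPLOAD_ERR_OK) {
    // Never trust the client-side file name: build a safe server-side one.
    $safeName = uniqid('homework_', true) . '_'
              . basename($_FILES['homework']['name']);

    if (move_uploaded_file($_FILES['homework']['tmp_name'],
                           $uploadDir . $safeName)) {
        // A real application would also record $safeName in the database,
        // linked to the student and to the course task being assessed.
        echo 'Homework submitted for assessment.';
    } else {
        echo 'Upload failed; please try again.';
    }
} else {
    echo 'No file received.';
}
```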

The “Cluster” Internet application is able to assess different knowledge, skills, productions and performances simultaneously in the same team assessment task. A student participating in a team assessment task can occupy the team member, team leader or group administrator hierarchical position. When the student completes an assessment task, he must complete the self-assessment and peer assessment forms. It is therefore necessary that these forms have different assessment criteria based on the hierarchical position of the assessed student, who can be a team member, a team leader or a group administrator. The team member assessment form is shown in Figure 22, followed by a sketch of the position-dependent selection of criteria.

Figure 22.

Team member assessment form
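A minimal sketch of how the application could select the grid according to the assessed student’s position; the criteria listed here are hypothetical placeholders, not the actual contents of the forms shown in Figures 22 to 24.

```php
<?php
// Hedged sketch (hypothetical criteria): the assessment grid depends on
// the assessed student's hierarchical position, so the same task can
// assess different skills at different levels.
function criteriaForPosition(string $position): array
{
    switch ($position) {
        case 'member':
            return ['Participation', 'Quality of individual work',
                    'Cooperation with teammates'];
        case 'leader':
            return ['Task distribution', 'Communication with members',
                    'Reporting to the group administrator'];
        case 'administrator':
            return ['Coordination of team leaders', 'Planning',
                    'Decision-making'];
        default:
            throw new InvalidArgumentException("Unknown position: $position");
    }
}

// Example: build the peer assessment form rows for a team leader.
foreach (criteriaForPosition('leader') as $criterion) {
    echo "<label>$criterion <input type='number' min='1' max='5'></label>\n";
}
```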

The team leader assessment form is shown in Figure 23.

Figure 23.

Team leader assessment form

The group manager assessment form is shown in Figure 24.

Figure 24.

Group manager assessment form

The administrator or assessor mode is the operating mode used by system administrators, teachers, assessors and developers of distance learning courses on the Internet (e-learning) to (1) manage and modify the student database, (2) manage and modify the course material database, (3) mark the students’ homework submitted in electronic format, (4) assess the performance of the students in teams, (5) group students into teams and (6) assign team members hierarchical positions as team member, team leader and group manager in order to implement the tree structure produced by the hierarchical aggregation of team members. The student management computer screen is shown in Figure 25; it allows the teacher or the assessor to create a new student as well as to modify or delete the record of an existing student.

Figure 25.

Student management form

The course task management form is shown in Figure 26; it allows the teacher or the evaluator to create a new course task as well as to modify or delete an existing task in the course material database.

Figure 26.

Course task management form

The teacher or assessor may mark individual homework or assignments submitted in electronic format and write comments about a student’s performance with the work or performance assessment form shown in Figure 27. This form is used only by the teacher or assessor, for summative assessment purposes, to give marks to work uploaded by students.

Figure 27.

Student’s individual work or performance assessment form

Figure 28 shows the computer form that allows the teacher or the assessor to perform teamwork assessment. During a teamwork assessment task, each student is assessed twice. The student first receives marks or assessment data constituting a formative assessment of his individual performance as team member, team leader or group manager. The student also receives a score constituting a summative assessment for the performance he gives during the teamwork assessment tasks and for his individual performances, namely homework submitted in electronic format and HTML exams. The teacher or assessor can assess each student’s performance during a teamwork task with the team member assessment form shown in Figure 28, which is the same form used by students for self-assessment and peer assessment, previously shown in Figure 22. This assessment form thus has two functions: first, it is used for formative assessment by students, who use it for self-assessment and peer assessment; second, it is used for summative assessment by teachers or assessors to mark the individual performance of the student in his team.

Figure 28.

Team member assessment form
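One possible representation of this dual use at the data level (a sketch only; the field names are assumptions and not the actual “Cluster” database fields) is to tag every stored form with its purpose and its author:

# Sketch: one stored record per completed assessment form. The "purpose"
# field separates the two uses of the same form: formative (self- and
# peer assessment filled in by students) versus summative (marking by
# the teacher or assessor). Field names are hypothetical.
from dataclasses import dataclass

@dataclass
class AssessmentRecord:
    assessed_student: str   # the student being assessed
    author: str             # who filled in the form (self, a peer or the assessor)
    role: str               # hierarchical position of the assessed student
    purpose: str            # "formative" or "summative"
    scores: dict            # criterion label -> score

records = [
    AssessmentRecord("alice", "alice", "team_leader", "formative",
                     {"Distributes subtasks among team members": 4}),
    AssessmentRecord("alice", "bob", "team_leader", "formative",
                     {"Distributes subtasks among team members": 3}),
    AssessmentRecord("alice", "teacher", "team_leader", "summative",
                     {"Distributes subtasks among team members": 4}),
]

# Only the summative records entered by the assessor count toward marks.
summative = [r for r in records if r.purpose == "summative"]
print(len(summative))  # -> 1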

The software application team member individual assessment screen is shown in Figure 29.

Figure 29.

Team member individual assessment screen

Once all the individual formative assessments have been completed by the team members through the self-assessment and peer assessment forms, the data entry form shown in Figure 30 is presented to the assessor to enter the mark or score for the task performed in a team.

Figure 30.

Screen for the assessment of a task in a team

The teacher or the assessor gives the student a summative assessment by observing his team performance according to his hierarchical position, which can be team member, team leader or group manager. The assessment criteria on the assessment forms differ depending on the hierarchical position occupied by the student, as shown in Figures 22, 23 and 24. This feature is a direct implementation of the problem of teamwork assessment with several hierarchical levels. To the authors’ knowledge, this functionality is currently implemented only in the “Cluster” Internet application and is not found in other e-learning and e-assessment Internet applications such as Moodle, WebCT and Blackboard.

During the teamwork assessment process, the teacher or the assessor has to produce both formative and summative assessments, at the individual and at the team level. These assessments are used to mark the team productions and to determine the student’s final grade for a given course. To assess a student and assign grades, the teacher or the assessor can consult the “Cluster” Internet application database and retrieve the student’s self-assessments as well as all of his peer assessments entered with the forms shown in Figures 22, 23 and 24. The computer screen that displays all of the self-assessment and peer assessment results for a given student is shown in Figure 31.

Figure 31.

Self-assessment and peer assessment display screen

The course final grade is computed from (1) the sum of all the individual scores, which includes the HTML exams and the homework to be submitted in the course modules, and (2) the sum of all the scores assigned by the teacher or the assessor to the student for the tasks he performed as a team member; a sketch of this computation is given after Figure 32. Finally, the main innovation of the “Cluster” Internet application, at the origin of the current doctoral project, is the aggregation function, whose tree data structure is implemented in the application’s database and thereby allows the grouping of students into teams with multiple hierarchical levels. This feature allows the system to assign the student hierarchical functions such as team member, team leader and group manager. The aggregation function is accessible from the main menu of the application, shown in Figure 32.

Figure 32.

Aggregation menu for team formation
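Under the assumption that all stored scores are expressed on compatible scales (a sketch of the computation described above, not the application’s actual marking code; the example numbers are invented), the final grade is simply the sum of the two families of scores:

# Sketch of the course final grade: (1) the sum of the individual scores
# (HTML exams and homework submitted in the course modules) plus (2) the
# sum of the teamwork scores assigned by the teacher or assessor.
# The numbers below are invented for illustration.

def final_grade(individual_scores, teamwork_scores):
    """Course final grade as the sum of both families of scores."""
    return sum(individual_scores) + sum(teamwork_scores)

# Three individual module scores (out of 60 overall) and two assessed
# team tasks (out of 40 overall) give a final grade out of 100.
print(final_grade([18.0, 15.5, 14.0], [17.0, 19.5]))  # -> 84.0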

The aggregation functionality implemented in the “Cluster” Internet application provides a solution to the problem addressed by the current research project, the implementation of an assessment process for teams with several hierarchical levels, a process that is largely absent from Moodle, WebCT and Blackboard. The form of the “Cluster” Internet application that implements the aggregation process, grouping students into teams with levels of hierarchy and appointing them as team member, team leader or group manager, is the computer screen shown in Figure 33. This form enables the teacher or the assessor to launch the aggregation process that groups students into teams and builds the multilevel tree structure stored in the application’s MySQL database; a sketch of this structure follows Figure 33.

Figure 33.

Aggregation process and team formation screen
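A minimal sketch of such a multilevel tree stored as an adjacency list follows, with Python’s built-in sqlite3 standing in for the MySQL database [22] actually used by the application; the table and column names are assumptions, not the “Cluster” schema:

# Sketch: hierarchical aggregation of team members stored as an adjacency
# list. Each row points to its supervisor; the group manager is the root
# of the tree (supervisor_id is NULL). sqlite3 stands in for MySQL here
# and the schema is an assumption, not the actual "Cluster" schema.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""
    CREATE TABLE team_member (
        id INTEGER PRIMARY KEY,
        name TEXT NOT NULL,
        role TEXT NOT NULL,  -- 'group_manager', 'team_leader' or 'team_member'
        supervisor_id INTEGER REFERENCES team_member(id)
    )
""")
db.executemany(
    "INSERT INTO team_member (id, name, role, supervisor_id) VALUES (?, ?, ?, ?)",
    [
        (1, "carol", "group_manager", None),
        (2, "alice", "team_leader", 1),
        (3, "dan", "team_leader", 1),
        (4, "bob", "team_member", 2),
        (5, "eve", "team_member", 3),
    ],
)

def subordinates(member_id):
    """Every member below the given node of the tree (recursive query)."""
    rows = db.execute("""
        WITH RECURSIVE below(id, name) AS (
            SELECT id, name FROM team_member WHERE supervisor_id = ?
            UNION ALL
            SELECT t.id, t.name
            FROM team_member t JOIN below b ON t.supervisor_id = b.id
        )
        SELECT name FROM below
    """, (member_id,)).fetchall()
    return [name for (name,) in rows]

# The group manager supervises both team leaders and, transitively,
# all of their team members.
print(subordinates(1))  # -> ['alice', 'dan', 'bob', 'eve']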

5.4. Experimentation with high school students

The testing of the “Cluster” Internet application with high school students by Mrs. Dalila Sebkhi [18, 19, 20, 21] constituted the first beta tests of the application on a large population of over 100 students (N > 100). Alpha tests had been performed before Mrs. Sebkhi’s experimentation by the CDAME researchers [18, 19, 20, 21]. In this experiment, the “Cluster” Internet application was used by high school students of the province of Quebec as an alternative method for teaching geology courses. The results of the experiment were purely qualitative and were based on Mrs. Sebkhi’s observations while the students used the application in their geology classes. Several students who used the “Cluster” application, and some directors of the Montreal School Board, argued that the application’s user interface was too rigid and not friendly enough for students who were teenagers from 12 to 16 years of age.

The high school students wanted the application’s user interface to make more use of multimedia elements such as videos and animated graphics, so that the course would feel more like a video game with avatars, as in the “Mecanika” application implemented by François Boucher-Genesse [76], rather than the basic drop-down menu user interface of the actual “Cluster” Internet application. For some students, learning to use the “Cluster” Internet application was nevertheless simple and easy. These students did not have any problem studying the course material, reviewing all the course modules and taking the geology course exams placed at the end of the course modules. The weaker students, however, experienced various problems when using the “Cluster” Internet application: (1) resistance to change, (2) loss of usernames and passwords, (3) errors while filling in the HTML exams, (4) getting lost in the navigation while studying the course material, (5) the impossibility of going back in the user interface navigation when course material was not understood or saved and the student wanted to regain access to the course materials or to previous sections and (6) the difficulty, for teachers or course assessors, of keeping track of progress through the modules and examinations for groups or classes with a large number of students.

Because her high school students faced the problems described above, Mrs. Sebkhi requested that four modifications be made to the “Cluster” Internet application user interface [18, 19, 20, 21]. These changes were implemented in the few months following the end of her teaching assignment III, so that Mrs. Sebkhi could use the new functionalities of the application from the start of her teaching assignment IV. The first modification, shown in Figure 34, was the addition of a field in the database identifying the student’s group or class, so that all students in the database are divided into classes or groups.

The second modification is the implementation of a back button allowing the student to return to the previous module or chapter, as shown in Figure 35.

The third modification, shown in Figure 36, is the implementation of a form giving access to the curriculum record sheet of every student registered in the “Cluster” Internet application database. This form allows the teacher or the assessor to consult the curriculum record sheet of a given student and to follow his progression through the course modules without having to log into the student’s account.

The fourth modification, shown in Figure 37, is the implementation of a form that displays a matrix of the progress in the course modules of all the students in a class or a group; a sketch of the computation behind this matrix follows Figure 37.

Figure 34.

Addition of a field for the group or the class of the student

Figure 35.

Implementation of a button to return to the previous module

Figure 36.

Curriculum record sheet access screen

Figure 37.

Student progress matrix screen
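A sketch of the computation behind this progress matrix follows (the data layout and names are hypothetical; the actual screen is shown in Figure 37). It builds one row per student of the class and one column per course module:

# Sketch: student progress matrix with one row per student of a class or
# group and one column per course module. The data layout is hypothetical.

MODULES = ["Module 1", "Module 2", "Module 3"]

completed = {  # student -> set of completed modules
    "alice": {"Module 1", "Module 2"},
    "bob": {"Module 1"},
}

def progress_matrix(students):
    """Rows of the matrix: one header row, then one row per student."""
    header = ["Student"] + MODULES
    rows = [[s] + ["X" if m in completed.get(s, set()) else "-" for m in MODULES]
            for s in students]
    return [header] + rows

for row in progress_matrix(["alice", "bob"]):
    print("  ".join(f"{cell:<8}" for cell in row))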

5.5. Experimentation with Canadian army cadets

The results of the experimentation of the “Cluster” Internet application with Canadian army cadets are shown in Table 1 [18, 19, 20, 21].

                                          Experimentation group      Control group
Population (N)                            27                         12
Topography course pretest                 12.81 %                    7 %
Topography course post-test               63.40 %                    55 %
Topography course overall score           83.45 %                    66.15 %
Knowledge increase rate                   50.59 %                    48 %
Candidates having passed the course       6                          10
Course abandonment (dropout)              21                         2
Success rate                              22 %                       83 %
User interface satisfaction rate (QUIS)   Liked: user friendliness.  Not applicable
                                          Disliked: feedback,
                                          terminology and
                                          resistance to change

Table 1.

Experimentation of the “Cluster” application with Canadian army cadets for team navigation courses using the map


6. Discussion

The current research project produced three main results under the research and development methodology: (1) the theory describing the process of hierarchical aggregate assessment, (2) the “Cluster” Internet application and (3) the data, results and conclusions from the testing of the “Cluster” Internet application with army cadets during team navigation patrols. The theory describing the process of hierarchical aggregate assessment has now been submitted to the scientific community through numerous publications [18, 19, 20, 21], so that the term “hierarchical aggregate assessment” may become internationally recognized. Following a first iteration of the research and development process, the “Cluster” Internet application underwent a first set of amendments proposed by Mrs. Dalila Sebkhi during her teaching assignment III at the Université du Québec à Montréal (UQAM). These results were presented and discussed in the “Results” section of this chapter, and the “Cluster” Internet application is now fully operational. Although the experiment is over, the army cadet organization has found the “Cluster” Internet application useful within the cadet movement to provide distance learning, to help cadets with learning disabilities and to help late-entry cadets of 15 to 18 years of age progress faster in their career. The application is now used by the cadets to provide distance courses on topography, navigation patrols, instructional techniques and general military knowledge. The results of the testing of the “Cluster” application by the army cadets show a knowledge increase of 50.59 %, which corresponds in Table 1 to the difference between the post-test and pretest scores (63.40 % − 12.81 %); this increase is almost identical to the 48 % (55 % − 7 %) produced by the traditional classroom teaching methods. This similarity of the knowledge increases in both cases could be explained by the “Clark [77]-Kozma [78] debate”, in which Clark [77] states that the media are only a vehicle transporting knowledge and do not influence it.

However, the success rate for learning topography with the “Cluster” Internet application is only 22 %, compared with 83 % for the control group. The 22 % success rate of the distance learning group can be explained in part by the fact that many of the cadets in the experimental group had learning disabilities. One of the major drawbacks of distance learning is that it leaves students alone in their learning process, without a classroom and without the presence of a teacher or colleagues to help them. Very often, students with learning disabilities registered in distance courses become confused by the lack of classroom dynamics, which destroys their motivation and desire to learn.


7. Conclusion

The present research project aims for the term “hierarchical aggregate assessment” to be accepted and recognized by the entire scientific community. The process of hierarchical aggregate assessment has been used everywhere and throughout the ages without any researcher or scientist naming it; one of the goals of the present research project is to resolve this issue by proposing the term “hierarchical aggregate assessment”. The work done in this research implemented this process in the areas of education, assessment and information technologies (IT). Further work and future research by the CDAME researchers will focus on (1) improving the user interface in the areas identified by the QUIS questionnaire, namely feedback, terminology and resistance to change, (2) the implementation of the process of hierarchical aggregate assessment in the field of management and (3) the determination of the influence of the “Cluster” Internet application on knowledge increase, user satisfaction and student success rates.

References

  1. Legendre, R. (2005). Dictionnaire actuel de l’éducation (3rd Edition). Montréal: Guérin.
  2. De Ketele, J. M. & Gérard, F.-M. (2005). La validation des épreuves d’évaluation selon l’approche par les compétences. Mesures et Évaluation en Éducation, 28 (3), 1-26.
  3. Van Kempen, J. L. (2008). Pourquoi a-t-on développé les compétences à l'école? [On Line]. Union des Fédérations d’Associations de Parents de l’Enseignement Catholique (UFAPEC). Access: http://www.ufapec.be/nos-analyses/pourquoi-a-t-on-developpe-les-compétences-a-l-ecole/.
  4. Allal, L. (2002). Acquisition et évaluation des compétences en milieu scolaire. In J. Dolz & E. Ollagnier (Eds.), L’énigme de la compétence en éducation (p. 77-94). Bruxelles: De Boeck.
  5. CDAME. (2013). Site Internet du Centre CDAME (Collectif pour le Développement et les Applications en Mesure et Évaluation) [On Line]. Access: http://www.cdame.uqam.ca.
  6. Harvey, S. & Loiselle, J. (2009). Proposition d’un modèle de recherche développement. Recherches qualitatives, 28 (2), 95-117.
  7. Sugrue, M., Seger, M., Kerridge, R., Sloane, D. & Deane, S. (1995). A prospective study of the performance of the trauma team leader. The Journal of Trauma: Injury, Infection and Critical Care, 38 (1), 79-82.
  8. Volkov, A. & Volkov, M. (2007). Teamwork and assessment: A critique. e-Journal of Business Education & Scholarship of Teaching, 1, 59-64.
  9. Baker, D. P. & Salas, E. (1992). Principles for measuring teamwork skills. Human Factors, 34, 469-475.
  10. Loiselle, J. (2001). La recherche développement en éducation: sa nature et ses caractéristiques. In D. M. Anadón & M. L'Hostie (Eds.), Nouvelles dynamiques de recherche en éducation (p. 77-97). Québec: Les Presses de l'Université Laval.
  11. Zaccaro, S. J., Mumford, M. D., Conelly, M. S., Marks, M. A. & Gilbert, J. A. (2000). Assessment of leader problem-solving capabilities. Leadership Quarterly, 11 (1), 37-64.
  12. MacMillan, J., Paley, M. J., Entin, E. B. & Entin, E. E. (2004). Questionnaires for distributed assessment of team mutual awareness. In N. A. Stanton, A. Hedge, K. Brookhuis, E. Salas & H. W. Hendrick (Eds.), Handbook of human factors and ergonomic methods. Boca Raton: Taylor and Francis.
  13. Furnham, A., Steele, H. & Pendelton, D. (1993). A psychometric assessment of the Belbin Team-Role Self-Perception Inventory. Journal of Occupational and Organizational Psychology, 66, 245-257.
  14. Freeman, M. & McKenzie, J. (2000). Self and peer assessment of student teamwork: Designing, implementing and evaluating SPARK, a confidential, web based system [On Line]. In Flexible learning for a flexible society. Proceedings of ASET-HERDSA 2000 Conference, Toowoomba, Qld, 2-5 July. ASET and HERDSA. Access: http://www.aset.org.au/confs/aset-herdsa2000/procs/freeman.html.
  15. Freeman, M. & McKenzie, J. (2002). SPARK, a confidential web-based template for self and peer assessment of student teamwork: Benefits of evaluating across different subjects. British Journal of Educational Technology, 33 (5), 551-569.
  16. Ritchie, P. D. & Cameron, P. A. (1999). An evaluation of trauma team leader performance by video recording. Australian and New Zealand Journal of Surgery, 69, 183-186.
  17. Lurie, S. J., Schultz, S. H. & Lamanna, G. (2011). Assessing teamwork: A reliable five-question survey. Family Medicine, 43 (10), 731-734.
  18. Lesage, M., Raîche, G., Riopel, M. & Sebkhi, D. (2013). Le développement d’une application Internet d’évaluation hiérarchique des apprentissages (évaluation agrégée) selon une méthodologie de recherche développement (R & D). Association Francophone Internationale de Recherche Scientifique en Éducation (AFIRSE).
  19. Lesage, M., Raîche, G., Riopel, M., Fortin, F. & Sebkhi, D. (2014). An e-assessment website to implement hierarchical aggregate assessment. ICCSSE 2014: International Conference on Computer Science and Software Engineering, Rio de Janeiro, Brazil.
  20. Lesage, M., Raîche, G., Riopel, M., Fortin, F. & Sebkhi, D. (2014). An e-assessment website to implement hierarchical aggregate assessment. World Academy of Science, Engineering and Technology, International Science Index 86, 8 (2), 925-933.
  21. Sebkhi, D., Raîche, G., Riopel, M. & Lesage, M. (2013). Une première mise à l’essai d’une application Internet d’évaluation hiérarchique des apprentissages (évaluation agrégée) avec des élèves du secondaire dans le cadre des stages III et IV de l’UQAM en accord avec l’approche par compétence du Ministère de l’Éducation des Loisirs et des Sports du Québec (MELS). Association Francophone Internationale de Recherche Scientifique en Éducation (AFIRSE).
  22. MySQL. (2013). MySQL website [On Line]. Access: http://www.mysql.com.
  23. Moodle. (2013). Moodle website [On Line]. Access: http://www.moodle.org.
  24. Blackboard. (2013). Blackboard website [On Line]. Access: http://www.blackboard.com.
  25. Nance, W. D. (2000). Improving information systems students' teamwork and project management capabilities: Experiences from an innovative classroom. Information Technology and Management, 1 (4), 293-306.
  26. Lavallée, M. (1969). Taxonomie des objectifs pédagogiques. Tome 1: Domaine cognitif. Québec: Presses de l’Université du Québec (Éducation nouvelle).
  27. Krathwohl, D. R., Bloom, B. S. & Masia, B. B. (Eds.). (1964). Taxonomy of educational objectives. Handbook II: The affective domain. New York: McKay.
  28. Hubert, S. & Denis, B. (2000). Des outils pour évaluer les compétences transversales. Actes du 1er Congrès des chercheurs en Éducation, 24-25 May 2000, Bruxelles.
  29. Jeunesse, C. (2007). Évaluer un apprentissage en ligne: éléments théoriques et pistes de réflexion. In J. C. Manderscheid & C. Jeunesse (Eds.), L'enseignement en ligne à l'université et dans les formations professionnelles. Pourquoi? Comment? Louvain-la-Neuve: De Boeck.
  30. Endrizzi, L. & Rey, O. (2008). L’évaluation au cœur des apprentissages [On Line]. Dossier d’actualité no 39, Service de veille scientifique et technologique, INRP. Access: http://www.scribd.com/doc/8574511/levaluation-au-coeur-des-apprentissages.
  31. Louis, R. & Bernard, H. (2004). L’évaluation des apprentissages en classe: théorie et pratique. Montréal: Groupe Beauchemin, éditeur ltée.
  32. Tardif, J. (2006). L’évaluation des compétences. Documenter le parcours de développement. Montréal: Chenelière Éducation.
  33. Palm, T. (2008). Performance assessment and authentic assessment: A conceptual analysis of the literature [On Line]. Practical Assessment, Research & Evaluation, 13 (4). Access: http://pareonline.net/pdf/v13n4.pdf.
  34. Wiggins, G. (1990). The case for authentic assessment [On Line]. Practical Assessment, Research & Evaluation, 2 (2). Access: http://PAREonline.net/getvn.asp?v=2&n=2.
  35. Wiggins, G. (1993). Assessment: Authenticity, context and validity. The Phi Delta Kappan, 75 (3), 200-214.
  36. Hart, D. (1994). Authentic assessment: A handbook for educators. New York: Addison Wesley.
  37. Rennert-Ariev, P. (2005). A theoretical model for the authentic assessment of teaching [On Line]. Practical Assessment, Research & Evaluation, 10 (2). Access: http://pareonline.net/pdf/v10n2.pdf.
  38. Marin-Garcia, J. A. & Lloret, J. (2008). Improving teamwork with university engineering students: The effect of an assessment method to prevent shirking. WSEAS Transactions on Advances in Engineering Education, 1 (5), 1-11.
  39. Swan, K., Shen, J. & Hiltz, R. (2006). Assessment and collaboration in online learning. Journal of Asynchronous Learning Networks, 10 (1), 45-62.
  40. Boud, D., Cohen, R. & Sampson, J. (1999). Peer learning and assessment. Assessment & Evaluation in Higher Education, 24 (4), 413-426.
  41. MacDonald, J. (2003). Assessing online collaborative learning: Process and product. Computers & Education, 40, 377-391.
  42. Worcester Polytechnic Institute (WPI). (2011). Teamwork and teamwork assessment [On Line]. Access: http://www.wpi.edu/Images/CMS/IQP/teamworkandteamworkassessment.doc.
  43. Durham, C. C., Knight, D. & Locke, E. A. (1997). Effects of leader role, team-set goal difficulty, efficacy, and tactics on team effectiveness. Organizational Behavior and Human Decision Processes, 72 (2), 203-231.
  44. Lingard, R. W. (2010). Teaching and assessing teamwork skills in engineering and computer science. Journal of Systemics, Cybernetics and Informatics, 18 (1), 34-37.
  45. Goldfinch, J. (1994). Further developments in peer assessment of group projects. Assessment & Evaluation in Higher Education, 19 (1), 29-35.
  46. Goldfinch, J. & Raeside, R. (1990). Development of a peer assessment technique for obtaining individual marks on a group project. Assessment & Evaluation in Higher Education, 15 (3), 210-231.
  47. Northrup, S. G. & Northrup, D. A. (2006). Multidisciplinary teamwork assessment: Individual contributions and interdisciplinary interaction. Paper presented at the 36th ASEE/IEEE Frontiers in Education Conference, October 28-31, San Diego.
  48. Bloom, B. S., Engelhart, M. D., Furst, E. J., Hill, W. H. & Krathwohl, D. R. (Eds.). (1956). Taxonomy of educational objectives: The classification of educational goals. Handbook I: Cognitive domain. New York: David McKay.
  49. Krathwohl, D. R. (2002). A revision of Bloom's taxonomy: An overview. Theory into Practice, 41 (4), 212-218.
  50. Bastiaens, T. (2007). Instructional design of authentic e-learning environments. Oral presentation at E-Learn 2007, Québec, Canada.
  51. Saskatchewan Professional Development Unit. (2003). Performance assessments: A wealth of possibilities [On Line]. Access: http://www.sasked.gov.sk.ca/branches/aar/afl/docs/assessment_support/perfassess.pdf.
  52. Olivier, L. (2002). L’évaluation des apprentissages [On Line]. Commission Scolaire de Montréal. Access: http://www.csdm.qc.ca/Csdm/Administration/pdf/reforme_vol1_no2.pdf.
  53. Bibeau, R. (2007). Des situations d'apprentissage et d'évaluation (SAE) sur Internet. Revue de l'association EPI [On Line], 91. Access: http://www.epi.asso.fr/revue/articles/a0701a.htm.
  54. Willey, K. & Freeman, M. (2006). Completing the learning cycle: The role of formative feedback when using self and peer assessment to improve teamwork and engagement. Proceedings of the 17th Annual Conference of the Australasian Association for Engineering Education, 10-13 December 2006, Auckland, New Zealand.
  55. Willey, K. & Freeman, M. (2006). Improving teamwork and engagement: The case for self and peer assessment [On Line]. Australasian Journal of Engineering Education, 02. Access: http://www.aaee.com.au/journal/2006/willey0106.pdf.
  56. Marshall-Mies, J. C., Fleishman, E. A., Martin, J. A., Zaccaro, S. J., Baughman, W. A. & McGee, M. L. (2000). Development and evaluation of cognitive and metacognitive measures for predicting leadership potential. Leadership Quarterly, 11 (1), 135-153.
  57. Kaye, W. & Mancini, M. E. (1986). Use of the Mega Code to evaluate team leader performance during advanced cardiac life support. Critical Care Medicine, 14 (2), 99-104.
  58. Davis, G. B., Olson, M. H., Ajenstat, J. & Peaucelle, J. L. (1986). Systèmes d'information pour le management. Volume I: Les bases. Boucherville: Éditions G. Vermette Inc.
  59. Burch, J. G. & Grudnitski, G. (1989). Information systems: Theory and practice (5th Edition). New York: John Wiley.
  60. Davis, G. B. & Olson, M. H. (1985). Management information systems: Conceptual foundations, structure, and development. New York: McGraw-Hill.
  61. Laudon, K. C. & Laudon, J. P. (2000). Management information systems: Organization and technology in the networked enterprise (6th Edition). Upper Saddle River: Prentice Hall.
  62. Laudon, K. C., Laudon, J. P. & Brabston, M. E. (2011). Management information systems: Managing the digital firm (5th Canadian Edition). Toronto: Pearson Education Canada Inc.
  63. Kanter, J. (1984). Management information systems (3rd Edition). New Jersey: Prentice Hall.
  64. Chin, J. P., Diehl, V. A. & Norman, K. L. (1988). Development of an instrument measuring user satisfaction of the human-computer interface. Proceedings of SIGCHI ‘88 (p. 213-218). New York: ACM/SIGCHI.
  65. Richey, R. C. & Nelson, W. A. (1996). Developmental research. In D. H. Jonassen (Ed.), Handbook of research for educational communications and technology (p. 1213-1245). New York: Macmillan.
  66. Savoie-Zajc, L. (2004). La recherche qualitative/interprétative en éducation. In T. Karsenti & L. Savoie-Zajc (Eds.), La recherche en éducation: étapes et approches (p. 123-150). Sherbrooke: Éditions du CRP.
  67. Savoie-Zajc, L. & Karsenti, T. (2004). La méthodologie. In T. Karsenti & L. Savoie-Zajc (Eds.), La recherche en éducation: étapes et approches (p. 109-121). Sherbrooke: Éditions du CRP.
  68. Borg, W. R. & Gall, M. D. (1983). Educational research: An introduction (4th Edition). New York: Longman.
  69. Nonnon, P. (1993). Proposition d'un modèle de recherche développement (R&D) technologique en éducation. In Regards sur la robotique pédagogique. Technologies nouvelles et éducation (p. 147-154). Paris: Publications du Service de technologie de l'éducation de l'Université de Liège et de l'Institut national de recherche pédagogique.
  70. Cervera, D. (1997). Élaboration d’un environnement d’expérimentation en simulation incluant un cadre théorique pour l’apprentissage de l’énergie des fluides. Unpublished doctoral thesis, Université de Montréal.
  71. Van der Maren, J. M. (2003). La recherche appliquée en pédagogie: des modèles pour l’enseignement (2nd Edition). Bruxelles: De Boeck.
  72. Nonnon, P. (2002). La R&D en éducation. In G. L. Baron & É. Bruillard (Eds.), Actes du symposium international francophone sur les technologies en éducation (p. 53-59). Paris: INRP.
  73. Ministère de la Défense nationale, Gouvernement du Canada. (2007). A-CR-CCP-701/PF-002 – Guides pédagogiques de l'étoile verte. Ottawa: Ministère de la Défense nationale, Cadets royaux de l'Armée canadienne.
  74. Sittig, D. F., Kuperman, G. J. & Fiskio, J. (1999). Evaluating physician satisfaction regarding user interactions with an electronic medical record system. Proceedings of the American Medical Informatics Association (AMIA) Annual Symposium, 400-404.
  75. LabMÉCAS. (2013). Laboratoire mobile pour l'étude des cheminements d'apprentissage en science (LabMÉCAS) [On Line]. Access: www.labmecas.uqam.ca/.
  76. Boucher-Genesse, F., Riopel, M. & Potvin, P. (2011). Research results for Mecanika, a game to learn Newtonian concepts. In Games, learning and society conference proceedings, Madison, Wisconsin.
  77. Clark, R. E. (1983). Reconsidering research on learning from media. Review of Educational Research, 53 (4), 445-459.
  78. Kozma, R. B. (1994). Will media influence learning? Reframing the debate. Educational Technology Research and Development (ETR&D), 42 (2), 7-19.
