Open access peer-reviewed chapter

Formative E-Assessment of Schema Acquisition in the Human Lexicon as a Tool in Adaptive Online Instruction

Written By

Guadalupe Elizabeth Morales-Martinez, Yanko Norberto Mezquita-Hoyos, Claudia Jaquelina Gonzalez-Trujillo, Ernesto Octavio Lopez-Ramirez and Jocelyn Pamela Garcia-Duran

Submitted: 22 March 2018 Reviewed: 21 September 2018 Published: 05 November 2018

DOI: 10.5772/intechopen.81623

From the Edited Volume

From Natural to Artificial Intelligence - Algorithms and Applications

Edited by Ricardo Lopez-Ruiz


Abstract

This chapter presents a comprehensive method of implementing e-assessment in adaptive e-instruction systems. Specifically, a neural net classifier capable of discerning whether a student has integrated new schema-related concepts from course content into her/his lexicon is used by an expert system with a database containing natural mental representations from course content obtained from students and teachers for adapting e-instruction. Mental representation modeling is used to improve student modeling. Implications for adaptive hypermedia systems and hypertext-based instructions are discussed. Furthermore, it is argued that the current research constitutes a new cognitive science empirical direction to evaluate knowledge acquisition based on meaning information.

Keywords

  • adaptive instruction
  • technology-enhanced assessment
  • human lexicon
  • formative e-assessment

1. Introduction

A significant number of cognitively oriented adaptive hypermedia systems (AHSs) for learning have been developed. Because of the formative character of AHSs, which emphasizes learning processes as they unfold [1], many of these systems are built mainly around users' cognitive styles or learning styles [2, 3, 4, 5], previous knowledge held before or acquired during an AHS session [6, 7, 8], or intellectual factors such as implicit theories of intelligence [9].

Typically, an AHS approach demands two types of information processing to achieve two goals [10]. The first process gathers information (dependent variables typifying personal and psychological attributes of a user [6, 7, 8]), which is used to assign the user to one of several learner models (cognitive classification). Based on this classification, a second process adapts the hypermedia instruction (e.g., adaptive content selection, adaptive presentation, and adaptive navigation [11]). Figure 1 illustrates these processes.

Figure 1.

Typical approach in developing adaptive hypermedia technologies.

Note from Figure 1 that achieving the second goal depends entirely on achieving the first, that is, on selecting a learner model. Any weakness in student classification therefore demands corrective behavior within the adaptation process so that the system can still accurately infer user goals and offer navigation support and content adaptation during instruction. Unfortunately, more often than not, user models are built on weak data collection (descriptive and/or psychological data), and this weakness leads to compensatory mechanisms: minimizing the cost of adaptive behavior and increasing user control over adaptation [12], weighing the pros and cons of adaptation [13], addressing user variability [14], and so on. In other words, the process is driven by the corrective adaptivity of the system rather than by adaptability in which the user can consciously participate in the adaptation [15].
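A minimal sketch of this two-goal pipeline, written as our own illustration rather than any published implementation, may help make the distinction concrete: goal 1 assigns the user to a learner model, and goal 2 adapts content and navigation accordingly. All names, fields, and the classification rule are hypothetical placeholders.

```python
from dataclasses import dataclass

@dataclass
class LearnerModel:
    name: str
    content_depth: str       # e.g., "overview" vs. "detailed"
    show_extra_links: bool

# Hypothetical learner models keyed by a classification label (goal 1 output).
MODELS = {
    "novice": LearnerModel("novice", "overview", True),
    "advanced": LearnerModel("advanced", "detailed", False),
}

def classify_user(user_features: dict) -> str:
    """Goal 1: assign the user to a learner model from collected data.
    A placeholder rule stands in for a real classifier here."""
    return "advanced" if user_features.get("prior_knowledge", 0.0) > 0.5 else "novice"

def adapt_instruction(model: LearnerModel, page: dict) -> dict:
    """Goal 2: adapt content selection and navigation to the chosen model."""
    adapted = dict(page)
    adapted["content"] = page["content"][model.content_depth]
    if not model.show_extra_links:
        adapted["links"] = [link for link in page["links"] if link["core"]]
    return adapted

page = {"content": {"overview": "Short intro...", "detailed": "Full treatment..."},
        "links": [{"url": "/basics", "core": True}, {"url": "/extras", "core": False}]}
print(adapt_instruction(MODELS[classify_user({"prior_knowledge": 0.8})], page))
```

In a real AHS, classify_user would be a model trained on collected user data, which is precisely the step this chapter aims to strengthen.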

Weaknesses in student modeling frequently stem from using cognitive tools that are controversial, poorly structured, or poorly developed; many such tools are notorious for lacking robust empirical support (e.g., learning style/cognitive style instruments [16]) and generally do not have a good reputation in cognitive science.

Thus, from a cognitive science point of view, there is clearly much to be said about student modeling. As we discuss in the following sections, research directions can be expanded to innovate student modeling and enhance AHSs by digitally implementing more sophisticated cognitive science tools for studying human learning and by introducing a third goal, assessment, into typical adaptive instruction systems.


2. Considerations on cognitive science of human learning and cognitive modeling of students

Common sense in formal education assumes that the better we understand learners' cognitive functioning during learning, the more effective instruction can be. Within educational technology, many intelligent tutoring systems (ITSs) claim to do this by modeling how students make decisions [17] and solve problems while socializing [18], or by considering users' emotional states during instruction [19]. Although this approach has many positive implications for research and development in cognitive ergonomics and engineering psychology [20], we believe that the current state of ITSs is still far from capitalizing on advances in cognitive science research. For instance, rather than drawing on cognitive research to innovate student modeling and improve error-type analysis of learner performance, AHS innovation has rested on corrective adaptability of instruction to support learning outcomes [17]. This way of evaluating a learner's performance resembles summative assessment, where the goal is to specify what a student does not know at the end of a course, rather than formative assessment, which seeks to know what a student knows during and after learning [21]. This approach to evaluating learning can be extrapolated to many fields of digital educational technology [22, 23].

To serve the goals of this chapter and to illustrate this point in greater depth, we next frame the discussion within the context of adaptive hypermedia systems (AHSs) to emphasize how educational technology development is strengthened when contextualized by basic cognitive research. The main goal here is to argue in favor of:

  1. Innovating educational technology by constantly binding its development to advances in basic cognitive science research.

  2. Considering new empirical directions to integrate assessment of learning and instruction into single parallel formulations to support adaptability of instruction.

Thus, the following description of a formative-oriented AHS computational system is offered as an example of how improvement opportunities are available to ITS research and development. This is achieved by taking the human lexicon as the starting point for developing an AHS that supports constructive learning outcomes.


3. The human lexicon as a potential cognitive construct to implement AHSs

The human mental lexicon is considered a memory capacity to store and meaningfully organize single concepts by connecting them through different types of semantic relations (a mental dictionary). This definition of one of our mental capacities was first proposed by Treisman in 1961 [24], and the lexicon is considered a central cognitive structure for describing language and human learning (e.g., learning a language).

As has been the case for most cognitive constructs introduced to explain the human mind, accepting a human lexicon as part of our cognitive architecture has not been easy. After heated academic debates, several views (cognitive models) of the lexicon have emerged, leading different research groups to adopt different theoretical positions, ranging from a mental dictionary-like system to the possibility of no lexicon at all. Currently, three dominant views guide academic research on this topic [25]: the multiple-lexicons view, which implies different stores for different lexical information such as sensorimotor, emotional, or spatial information [26, 27]; the single-lexicon view, in which all lexical levels are integrated [28]; and the no-lexicon view (lexical knowledge without a mental lexicon [29]).

In spite of the controversy surrounding this topic, the concept of a human lexicon has been appealing enough to attract attention from educational technology developers. For instance, Salcedo et al. [30] presented an adaptive hypermedia model (LEXMATH) that illustrates this point. These authors argued that learner modeling is optimized by considering a student's lexicon. In this AHS model, students' lexicons on general or specific topics are obtained through surveys and maintained in a database. An ideal lexical domain is obtained from teachers, and during instruction an expert system optimizes learning paths by adapting navigation support and teaching activities to minimize differences between students' lexicons and the ideal lexical domain in the field of mathematics.

Models of this type point in a more robust direction for innovating student modeling, since they empower AHS technology with a developed theoretical framework of human mental representation, but they remain incomplete. Notice that LEXMATH does not subscribe to any specific academic view or model of the human lexicon; it seems to rest on a commonsense, dictionary-like view. This prevents the system from using robust methodology to assess the specific assumptions about lexical behavior (especially regarding learning) promoted by a cognitive model. Rather, LEXMATH again describes a kind of error-type analysis approach that minimizes differences between an expert and a learner, in which lexical knowledge acquisition (modification) is measured with indicators unfamiliar to robust cognitive views of the lexicon. As noted before, this is not uncommon, since supporting cognitive-based instruction by minimizing differences is frequently done in modern ITS and AHS approaches.

As we describe next, alternative empirical research directions that impose a stronger connection between basic cognitive research and educational technology implementation empower innovation without discarding established ITS and AHS development techniques. Specifically, to continue the lexicon discussion, we describe a cognitive constructive-chronometric system that assesses human lexicon-oriented learning while improving student modeling to minimize corrective adaptability.

Interestingly, this model subscribes to the third view of the human lexicon, the no-lexicon view. As expected, whenever an academic effort subscribes to a specific view, it immediately inherits academic criticism from alternative views. However, taking this step forward yields some advantages:

  1. Methodology is obtained to measure specific assumptions about how lexical knowledge is acquired.

  2. In contrast to alternative lexicon views, the no-lexicon proposal is computationally plausible given recent advances in computer science for modeling learners, namely, connectionist models of mental knowledge representation.

  3. Most importantly, as will be described, the use of artificial neural net classifiers (ANNs) allows researchers to engage with cognitive theoretical developments suggesting that schemata for assimilating new knowledge do not really exist in memory; rather, knowledge schemata emerge as required for learning and thinking purposes.

Finally, by embedding these cognitive precepts about the human lexicon into AHS development, dynamic assessment of learning is given a prominent role in adapting and supporting digital instruction. This requires another way of approaching the adaptability of an ITS.


4. Adaptive instruction and the constructive/chronometric e-assessment approach

Instruction and assessment are integral parts of teaching to improve students' experiences [31]. For instance, learning-oriented assessment (also referred to as formative assessment or assessment for learning) requires students to actively use feedback and self-monitoring from instruction and assessment as keys to successfully acquiring appropriate new knowledge from a course [32]. It is assumed that assessment provides explicit and implicit messages to facilitate a student's academic performance.

Let us first present a general framework of implementing dynamic assessment inside the context of AHS development. In this proposal, assessment is assumed to exert effects at various levels, and it constitutes by itself a domain and a goal. Figure 2 illustrates this point.

Figure 2.

Diagram of how continuous assessment of student knowledge acquisition affects various levels of processing in an AHS learning session.

An e-assessment system that complies with these evaluation requirements, implementation viability, and cognitive science principles was first presented by Morales and colleagues [33, 34, 35]. At the core of their assessment system (EVCOG, for cognitive evaluator) is a neural net classifier capable of identifying students who have integrated schema-related concepts from a school course into their lexicon (these schema-related concepts are obtained using natural semantic nets; Figure 3A). The net's classification capacity rests on the cognitive finding that a semantic priming effect (in a semantic priming study) is obtained from schema-related words only if meaningful long-term learning has occurred, that is, once a student has integrated the new knowledge into her/his long-term memory (single-word schemata priming [36, 37]). Thus, the classifier uses a student's schema-related word-recognition times to assess whether the student has integrated new knowledge into long-term memory, has merely retained information in short-term memory (e.g., to pass a test), or has acquired no new schemata at all. Figure 3B shows the role of this net classifier within a cognitive constructive-responsive/chronometric assessment of learning [38, 39].

Figure 3.

Concepts related to schema course content are selected during a constructive evaluation (A). These concepts can be used to assess whether a student integrated new information into her/his long-term memory using digitized cognitive semantic priming techniques (chronometric evaluation; B). Word-recognition latency patterns are used by the neural net to discriminate between successful and unsuccessful learners.

To train a classifier, hundreds of schema-related word-recognition patterns from successful and unsuccessful learners are presented to it. Achieving this first requires obtaining schema-related concepts from students before and after a course (i.e., after learning).
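As an illustration only (synthetic data, not the EVCOG classifier itself), the following sketch trains a generic neural net classifier on schema-related word-recognition latency vectors, assuming, per the text, that successful learners show longer recognition times for schema-related pairs:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n_pairs = 20  # one latency (ms) per schema-related word pair per student

# Toy assumption, following the text: learners who integrated the schema show
# longer recognition times on schema-related pairs (schemata priming).
successful = rng.normal(loc=650, scale=40, size=(100, n_pairs))
unsuccessful = rng.normal(loc=560, scale=40, size=(100, n_pairs))

X = np.vstack([successful, unsuccessful])
y = np.array([1] * 100 + [0] * 100)  # 1 = schema integrated, 0 = not

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X, y)

new_student = rng.normal(loc=640, scale=40, size=(1, n_pairs))
print("schema integrated?", bool(clf.predict(new_student)[0]))
```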

Figure 4A shows a computer system for obtaining students' and teachers' concept definers for target schema-related concepts using a technique called natural semantic net mapping. This technique produces definitions (using single-concept definers such as nouns and adjectives) of represented objects based on their meanings, not on free associations or pure semantic category memberships [40, 41].

Figure 4.

The schema-related concepts (left) used to train a neural net through semantic priming studies are obtained from simulated connectionist schemata behavior (B and C), which is based on teachers' and students' conceptual semantic nets (A).

In this technique, the 10 highest-ranked definers of each target concept (the SAM group) can be used to draw a semantic net, if desired. Some concepts serve as definers for more than one target concept; these common definers interconnect the other definers and target concepts. Numerous common definers tend to emerge whenever there are close links among target concepts (schemata).
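A minimal sketch of this step, under our own simplifying data layout, extracts each target's SAM group (its highest-M definers) and the common definers that interconnect targets:

```python
from collections import defaultdict

# definitions[target][definer] -> M value (sum of participants' ranks);
# all figures here are invented for illustration.
definitions = {
    "memory": {"storage": 27, "brain": 25, "recall": 21, "information": 18},
    "mind":   {"brain": 30, "thought": 24, "information": 16, "symbols": 12},
}

def sam_group(defs, size=10):
    """Return the highest-M definers of one target concept."""
    return sorted(defs.items(), key=lambda kv: kv[1], reverse=True)[:size]

sam = {target: sam_group(d) for target, d in definitions.items()}

# Common definers: concepts defining more than one target; they interconnect
# targets and other definers in the semantic net.
membership = defaultdict(set)
for target, group in sam.items():
    for definer, _ in group:
        membership[definer].add(target)
common = {d: ts for d, ts in membership.items() if len(ts) > 1}
print(common)  # {'brain': {'memory', 'mind'}, 'information': {'memory', 'mind'}}
```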

A constraint satisfaction neural net (CSNN) is developed from concept cooccurrence across SAM groups, such that the probability that two concepts cooccur (or do not) becomes their weight of association in a symmetric matrix, with k possible connections among N concepts, where k = N(N − 1)/2. The weight of association between two concepts (W) is calculated using the following derivative of the Bayesian formula:

$$W_{ij} = \ln\left[\frac{p(X=0\ \&\ Y=1)\;p(X=1\ \&\ Y=0)}{p(X=1\ \&\ Y=1)\;p(X=0\ \&\ Y=0)}\right]^{-1}, \qquad (E1)$$

where X represents one concept of a pair to be associated and Y the other. In determining association values among concepts in a natural semantic network such as the one described earlier, the joint probability p(X = 1 & Y = 0) is obtained by calculating how often the definer X of a pair appears in a list of definers in which Y does not appear, and likewise for the other probability values. These association values are used as the input matrix to the CSNN to simulate schemata of interest [42] (Figure 4B and C), and a large set of metrics of concept organization and structure can be obtained [41]. From schema simulations and semantic net analysis, schema-related word pairs are selected to implement semantic priming studies. Students' word-recognition latencies for these word pairs are then presented to the classifier for student classification (Figure 4, left).
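The following sketch computes Eq. (1) from SAM-group cooccurrence, treating each SAM group as a set of definers; the epsilon smoothing is our own assumption to avoid taking the logarithm of zero probabilities:

```python
import numpy as np

# Each SAM group is treated as a set of definers (toy data).
sam_groups = [
    {"brain", "storage", "recall"},
    {"brain", "thought", "symbols"},
    {"brain", "storage", "information"},
]
concepts = sorted(set().union(*sam_groups))
eps = 1e-6  # our own smoothing to avoid log(0)

def joint_p(x, y, x_present, y_present):
    """Fraction of SAM groups matching the given presence pattern of x and y."""
    return sum(((x in g) == x_present) and ((y in g) == y_present)
               for g in sam_groups) / len(sam_groups)

n = len(concepts)
W = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        x, y = concepts[i], concepts[j]
        ratio = ((joint_p(x, y, False, True) + eps) * (joint_p(x, y, True, False) + eps)) / \
                ((joint_p(x, y, True, True) + eps) * (joint_p(x, y, False, False) + eps))
        W[i, j] = W[j, i] = -np.log(ratio)  # ln[...]^(-1) = -ln[...]
print(np.round(W, 2))
```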


5. Empirical support for e-assessment based on the human lexicon

To illustrate these concepts, we describe data resulting from the application of constructive-chronometric assessment in an undergraduate psychology course on the computational mind. Figure 5A and B shows partial instances of definitions obtained for a set of 10 schema-related target concepts relevant to this course before learning (Figure 5, top panels) and after learning (Figure 5, bottom panels). The teacher of the course provided the following target concepts: mind, computation, von Neumann, Turing machine, connectionism, memory, computational mind, working memory, long-term memory, and HPI (human information processing).

Figure 5.

Ten relevant concept definers (SAM group) used to define schema concepts (A) are obtained by a computer system for natural semantic nets (B) before and after a course on the computational mind. Cooccurrence weight associations among concepts (C) and a Gephi analysis using the Yifan Hu algorithm (D) can be produced using this semantic mapping technique.

In developing a natural semantic net, participants are allowed 60 seconds to provide definers for each concept. Then, following each definition task, they rank each definer (between 1 and 10) in terms of how well it defines the target concept. After the system has randomly presented all target concepts, it calculates the 10 highest-ranked definers for each target (the SAM group; Figure 5A). For later consideration in building an expert system, note in Figure 5A that the M value corresponds to the sum of the ranks assigned by all participants to each definer; this value measures the relevance of the definition to the target concept. Other values, such as the density of the net (G value) and the richness of the definers for each target (J value), are also calculated [40].

Note also in Figure 5B that before learning (top panel), some targets lacked complete definitions, and fewer common definers were obtained. This lack of connectivity is reflected when a weight association matrix among concepts is calculated using Eq. (1) (Figure 5C, top). This is not the case for the symmetric weight association matrix obtained after learning (Figure 5C, bottom). In turn, these connectivity matrices can be used as input to many visualization tools, as shown in Figure 5D. Before learning, the visual concept organization shows all the definers arranged in two main groups connected by a single central one (PROCESSES). In contrast, at the end of the course, the net shows a more sophisticated concept organization resembling a small-world structure: a set of highly clustered neighborhoods with a short average path length, in which a small number of well-connected nodes serve as hubs. Such a net is a normal result of learning when this technique is used [41].

This approach to evaluating learning emphasizes two aspects. First, the semantic net focuses on identifying meaning formation. For instance, at the end of the course, students centered their meaning formation on the core concepts of the computational mind: symbol; mind and brain; and the leading figures of this academic field, Turing and von Neumann. The teacher confirmed that this was the intent.

Second, the weight matrix is used by a CSNN to simulate schema behavior, as shown in Figure 6. Here, the SYMBOLS input receives 100% activation; as a result, MIND and DURATION were the only activated output concepts. When asked about this result, students argued that, according to what they had learned in the course, a core concept in cognitive theory is that all mental activities occur in time, even symbol processing and construction. The teacher also intended this schema acquisition. In addition, note from the surface plot in Figure 6 that the balanced positivity and negativity of the weight association values (from +10 to −60) enhanced correct discrimination among the schema-related concepts.

Figure 6.

User interface for modeling schema-based behavior, and a surface plot of its underlying weight association matrix (bottom center).

From the computer models and from semantic definers relevant to meaning formation (e.g., common definers that emerge in SAM groups, or concepts relevant to a schema), schema-related word pairs can be selected to perform a semantic word priming study.

After the course, schema-related word pairs elicit longer word-recognition times because a whole schema is activated (not simply a lexical association).

To illustrate this point, Figure 7 shows interaction graphs describing a frequent pattern in schema-related word-recognition times. Figure 7A shows that at the beginning of the course, schema-related pairs are not significantly differentiated from other semantically related word pairs. This is not the case at the end of the course, where students required significantly longer processing times to recognize schema-related words (schemata priming).
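A toy sketch of the chronometric comparison behind Figure 7 (entirely synthetic latencies, with the schema effect built in by assumption) would look like this:

```python
import numpy as np

rng = np.random.default_rng(1)
latencies = {  # ms; the schema slowdown after learning is built in by assumption
    "before": {"associative": rng.normal(600, 30, 50),
               "schema":      rng.normal(605, 30, 50),
               "unrelated":   rng.normal(640, 30, 50)},
    "after":  {"associative": rng.normal(595, 30, 50),
               "schema":      rng.normal(660, 30, 50),  # schemata priming
               "unrelated":   rng.normal(645, 30, 50)},
}
for phase, conds in latencies.items():
    effect = conds["schema"].mean() - conds["associative"].mean()
    means = {c: round(v.mean()) for c, v in conds.items()}
    print(phase, means, "| schema effect:", round(effect), "ms")
```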

Figure 7.

Students’ word-recognition latency times corresponding to associative, schema-related, and nonrelated words (A). Comparison of schema priming effects obtained from this study and from similar studies involving other knowledge domains (B).

This schema priming is assumed to occur when schema information is stored in long-term memory, which likely explains why the neural net (after training) is useful for discriminating between successful and unsuccessful learners.

This is relevant because even though we cannot see a schema in the lexicon, we can track its footprints as evidence that long-term learning has occurred. On the other hand, it is not necessary to specify a lexicon; it suffices to say that lexical information is obtained and organized as proposed by the no-lexicon view. Figure 7B shows that this effect can vary depending on the knowledge domain and the effect of instruction [34, 37, 39].

The cognitive assessment of learning just described opens several possibilities. Consider a study [43] carried out with 60 first-semester engineering undergraduates taking a course on computer usability. Fifteen students failed the course but, after a subsequent corrective course, succeeded and obtained course credit. Figure 8 shows the mental concept representations obtained by constructive-chronometric assessment before and after the corrective course.

Figure 8.

A fractured mental representation on computer usability (A), changing after a corrective course (B).

Note that at the beginning of the course, the EVCOG system showed that students had a mental representation with separated concept clusters (A), which leads to confusion about the meaning of the topic. After the corrective course, students presented a single unified knowledge schema in which DESIGN, INTERFACE, and USER showed meaningful centrality (B). The teacher in charge argued that, after seeing the system's cognitive report at the beginning of the course, she attempted a meaningful integration of topics by making DESIGN the main reference for meaning formation. Chronometric assessment supported this learning process: schemata priming was absent at the beginning of the learning period but appeared at the end of classes, supporting the idea that students not only passed the course but also retained the schemata over the long term.

Is it possible to obtain the same results with an ITS-AHS? Notice that the problem now is not cognitively modeling students but modeling instructors. As described next, current academic efforts are being made in this direction.


6. Adaptive e-instruction through e-assessment in e-learning environments: a proposal

Up to this point, the discussion of applying e-assessment to navigation support and content adaptation has focused only on AHSs. Note, however, that the same arguments apply to alternative e-instruction systems. For instance, adaptive navigation support in an AHS or in other e-instruction can be implemented under the same assumptions by considering the model presented in Figure 9.

Figure 9.

Proposal for an adaptive instruction/assessment system.

6.1. The student model

In a functional adaptive instruction system such as the one shown in Figure 9, the student model is a domain-specific, well-trained classifier. Empirical research in several knowledge domains has shown that this type of classifier classifies 95–98% of instances successfully [38].

6.2. Expert model: determining concept organization of meaning formation

During the definition of a target concept (in natural semantic net mapping), after a student produces the highest-ranked concept definer (indicated by its M value), the next-highest-ranked definer depends on the concept's frequency (F) in the definition task and the time required to produce it, that is, its interresponse time (ITR; see the right column of the SAM group in Figure 5A). Thus, the M value of each definer can be correctly predicted (98% accuracy) using the following equation [41]:

$$M = A\,e^{\left(B/F \,+\, C\,\mathrm{ITR}\right)} + D\ln(F) \qquad (E2)$$

where A, B, C, and D are constants obtained from a fit analysis. Here, word position in a SAM group is needed only to identify which definer ranks higher, since concept frequency has already been used to filter the SAM group.
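As a sketch of how the constants could be obtained, the following code fits Eq. (2) to synthetic definer data with a standard nonlinear least-squares routine; the data and starting values are invented for the demonstration:

```python
import numpy as np
from scipy.optimize import curve_fit

def m_value(X, A, B, C, D):
    """Eq. (2): M = A*exp(B/F + C*ITR) + D*ln(F)."""
    F, ITR = X
    return A * np.exp(B / F + C * ITR) + D * np.log(F)

# Invented observations: definer frequency F and interresponse time ITR (s).
F = np.array([30.0, 24.0, 18.0, 15.0, 11.0, 8.0])
ITR = np.array([2.1, 2.8, 3.5, 4.0, 4.9, 5.6])
rng = np.random.default_rng(2)
true_params = (40.0, 5.0, -0.2, 10.0)  # made-up constants for the demo
M_obs = m_value((F, ITR), *true_params) + rng.normal(0.0, 0.5, F.size)

params, _ = curve_fit(m_value, (F, ITR), M_obs, p0=(30.0, 1.0, -0.1, 5.0), maxfev=20000)
print("A, B, C, D =", np.round(params, 2))
print("predicted M:", np.round(m_value((F, ITR), *params), 1))
```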

Consider the case of a user searching for information on a web page (information foraging). The page must contain linked concepts sufficient for meaning formation (obtained using a natural semantic net). Then, after calculating the M values of the selected concepts (considering the time the user takes to select each available concept, i.e., the ITR), a comparison can be made to check whether the M values resulting from the search correspond to a proper path of optimized M values representing ideal meaning formation [44].

To illustrate this point, consider Figure 9. Here, a user has an initial representational state, or initial meaning, of the web contents. This initial conceptual organization is not assumed to be identical (isomorphic) to the concept organization of the web page, but homomorphic to it. Information foraging through time (R) is based on the user's cognitive strategy for obtaining meaning from the contents. Thus, transforming the conceptual organization (T) and acquiring new concepts serve to obtain a valid homomorphic representation of the contents such that T′R = RT. A transformation path can be specified as:

$$R\,T\big(S(t), O(t)\big) = T'\,R\big(S(t), O(t)\big), \qquad (E3)$$

where O(t) denotes a specific conceptual organization (defined by natural semantic net parameters), which in turn defines R. Furthermore, using basic notation from automata theory [45], a transition rule can be specified from Eq. (3) as follows:

$$\delta'(q, w) = \delta\big(\delta'(q, x), a\big) = T'\,R\big(S(t), O(t)\big), \qquad (E4)$$

where meaning formation implies regulation of a transition rule δ′(q, w) = T′ (Figure 10).

Figure 10.

Building a mental model from web page contents through meaning formation [44].

For example, consider a set of the 10 highest-ranked concepts that provide most of the connectivity in a natural semantic network, [q0, q1, q2, q3, q4, q5, q6, q7, q8, q9]. Here, proper meaning formation requires going from q0 to q9. Now suppose that, after information foraging, a user produces a transition set such as [q1, q6, q0, q3, q5, q7, q4, q9, q8, q10], so that:

Natural semantic net ∩ information foraging = [q0, q1, q3, q4, q5, q6, q7, q8, q9]

Since the user's exploration of the contents missed only one relevant semantic concept (q2), it is assumed that the user obtained a valid homomorphic mental representation of the meaning implied by the web page, even when a concept's position in the path (its estimated M value, per Eq. (2)) is high.
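A small sketch of this foraging check, with an assumed coverage threshold for declaring the representation homomorphic, follows:

```python
net = [f"q{i}" for i in range(10)]  # q0..q9 in ideal M-value order
foraged = ["q1", "q6", "q0", "q3", "q5", "q7", "q4", "q9", "q8", "q10"]

covered = [q for q in net if q in foraged]       # net ∩ information foraging
missed = [q for q in net if q not in foraged]    # relevant concepts skipped
off_net = [q for q in foraged if q not in net]   # visits outside the net

print("covered:", covered)   # q0..q9 minus q2
print("missed:", missed)     # ['q2']
print("off-net:", off_net)   # ['q10']
coverage = len(covered) / len(net)
print("valid homomorphic representation?", coverage >= 0.9)  # assumed threshold
```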

The expert system's control mechanisms adapt navigation links to minimize differences between information foraging values and meaning formation (determined by the transition rules specified by Eq. (4)), as well as by using the neural net classifier's output (successful vs. unsuccessful integration of information into the user's lexicon).

6.3. Expert model: inference engine

The expert system includes a PROLOG backward-chaining inference engine that builds a valid "mental representation (GOAL)" from natural semantic net data structures in the knowledge domain (templates) at the request of a decision rule. This rule system considers whether schema priming for a specific module has been achieved, by consulting the neural net classifier and by comparing the obtained path of M values against an ideal descending organization of M values (a sketch of this rule follows the list below). If a semantic effect is not obtained, then the following events occur:

  1. The subsequent knowledge modules remain disabled.

  2. The system instructs the inference engine to use the database to construct the closest mental representation based on the user’s concept path (link set). Then, the navigation is modified based on the template that best approximates the user’s initial exploration, and the user is prompted to try again.
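The sketch below renders this decision rule in Python for illustration (the actual system uses a PROLOG backward-chaining engine); the tolerance for deviation from the ideal M-value organization is our own assumption:

```python
def evaluate_module(schema_priming: bool, path_m: list, ideal_m: list,
                    tol: float = 10.0) -> dict:
    """Gate module progression on schema priming and the M-value path.
    tol (mean absolute deviation from the ideal descending M values) is an
    assumed threshold, not taken from the source."""
    deviation = sum(abs(a - b) for a, b in zip(path_m, ideal_m)) / len(ideal_m)
    if schema_priming and deviation <= tol:
        return {"unlock_next": True, "adapt_links": False}
    # No semantic effect: keep subsequent modules disabled, rebuild navigation
    # from the closest template, and prompt the user to try again.
    return {"unlock_next": False, "adapt_links": True, "retry": True}

print(evaluate_module(schema_priming=False,
                      path_m=[95, 78, 81, 51], ideal_m=[95, 78, 60, 51]))
```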

Currently, research is under way to achieve dynamic optimization of information search by adapting navigational support based on minimizing differences between the meaning values of the user and the knowledge domain, rather than waiting for the user to complete a knowledge module.

6.4. Knowledge domain

An adaptive e-instruction system (AHS/hypertext) within the present scope requires a database of natural semantic networks similar to those described earlier. Here, templates are data structures containing SAM groups and their semantic values, whose information can be accessed by a PROLOG-based inference engine. As the sample used to develop these SAM groups grows, better predictions for adapting navigational support can be achieved.
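A template might be organized as in the following sketch, with field names that are purely illustrative:

```python
# Illustrative template for one target concept; field names are our own.
template = {
    "target": "memory",
    "sam_group": [  # (definer, M value), highest-ranked first
        ("storage", 95), ("brain", 88), ("recall", 74), ("information", 60),
    ],
    "G": 0.42,  # net density
    "J": 0.61,  # definer richness
}

def ideal_m_path(tpl: dict) -> list:
    """Ideal descending M values used to judge a user's foraging path."""
    return [m for _, m in tpl["sam_group"]]

print(ideal_m_path(template))  # [95, 88, 74, 60]
```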

6.5. Interface

As shown in Figure 11A, when a student begins a learning session, she/he is presented with a menu of course content options.

Figure 11.

Interface showing a menu of instruction options (A) and assessment (B).

Before and after exploring each module, a semantic priming study must be performed to provide the expert system module with information for adapting navigation support by modifying the link structure of the module according to a meaning formation template. After a module or an entire course is completed, the user can obtain a cognitive performance report (Figure 11B). This report serves as an explicit assessment result that empowers the user to adapt her/his search of the content (encouraging adaptability), whereas the link modification is an implicit message of corrective adaptability that improves proper meaning formation of the content. The selection of learning activities, whether through hypermedia or through modification of knowledge content, depends on the expert model's evaluation of the user's meaning formation.


7. Conclusions

The goal of the proposed system is to promote assessment tasks as learning tasks, student involvement in assessment, and forward-looking feedback in adaptive e-instruction systems [32]. The system also addresses the enormous delay in e-assessment innovation, where e-assessment has been limited to the mere digitization of traditional, sometimes ancient, evaluation methods [35].

A new empirical research line is opened in which student modeling is improved by using tools of cognitive science in adaptive e-learning systems in ways that were not possible before. We believe that research exploring the human lexicon as a way to adapt instruction will be at the center of future developments in AHS/hypertext.


Acknowledgments

This study was supported by DGAPA-UNAM through a grant from the Support Program for Research and Technological Innovation Projects (PAPIIT TA400116).

References

  1. Gouli E, Papanikolaou K, Grigoriadou M. Personalizing assessment in adaptive educational hypermedia systems. In: De Bra P, Brusilovsky P, Conejo R, editors. Adaptive Hypermedia and Adaptive Web-Based Systems. Vol. 2347. Heidelberg: Springer Verlag; 2002. pp. 153-163. DOI: 10.1007/3-540-47952-X_17
  2. Mampadi F, Chen SY, Ghinea G, Chen MP. Design of adaptive hypermedia learning systems: A cognitive style approach. Computers & Education. 2011;56:1003-1011. DOI: 10.1016/j.compedu.2010.11.018
  3. Triantafillou E, Pomportsis A, Demetriadis S. The design and the formative evaluation of an adaptive educational system based on cognitive styles. Computers & Education. 2003;41:87-103. DOI: 10.1016/S0360-1315(03)00031-9
  4. Bernard J, Chang TW, Popescu E, Graf S. Learning style identifier: Improving the precision of learning style identification through computational intelligence algorithms. Expert Systems with Applications. 2017;75:94-108. DOI: 10.1016/j.eswa.2017.01.021
  5. Popescu E. A unified learning style model for technology enhanced learning: What, why and how? International Journal of Distance Education Technologies. 2010;8:65-81. DOI: 10.4018/jdet.2010070105
  6. Martins AC, Faria L, Vaz de Carvalho C, Carrapatoso E. User modeling in adaptive hypermedia educational systems. Educational Technology & Society. 2008;11:194-207. Available from: https://www.jstor.org/stable/jeductechsoci.11.1.194 [Accessed: May 07, 2018]
  7. Mampadi F, Ghinea G, Huang PR, Chen SY. Influence of prior knowledge and cognitive styles in adaptive hypermedia learning systems. In: Mizoguchi R, Sitthisak O, Hirashima T, Biswas G, Supnithi T, Yu FY, editors. Proceedings of the 19th International Conference on Computers in Education. Thailand: Asia-Pacific Society for Computers in Education; 2011. pp. 1-3
  8. Triantafillou E, Georgiadou E, Economides AA. Adaptive hypermedia systems: A review of adaptivity variables. In: Proceedings of the Fifth Panhellenic Conference on Information and Communication Technologies in Education; Thessaloniki, Greece; 2006. pp. 75-82
  9. Greene JA, Costa LJ, Robertson J, Pan Y, Deekens VM. Exploring relations among college students' prior knowledge, implicit theories of intelligence, and self-regulated learning in a hypermedia environment. Computers & Education. 2010;55:1027-1043. DOI: 10.1016/j.compedu.2010.04.013
  10. Popescu E. Dynamic adaptive hypermedia systems for e-learning [thesis]. France: Universite de Technologie de Compiegne, HAL archives; 2008
  11. Tsianos N, Germanakos P, Lekkas Z, Mourlas C. An assessment of human factors in adaptive hypermedia environments. In: Mourlas C, Germanakos P, editors. Intelligent User Interfaces: Adaptation and Personalization Systems and Technologies. Greece: National & Kapodistrian University of Athens; 2009. pp. 1-34. DOI: 10.4018/978-1-60566-032-5.ch001
  12. Tsandilas T, Schraefel MC. Usable adaptive hypermedia systems. New Review of Hypermedia and Multimedia. 2004;10:5-29. DOI: 10.1080/13614560410001728137
  13. De Bra P. Pros and cons of adaptive hypermedia in web-based education. CyberPsychology & Behavior. 2000;3:71-77. DOI: 10.1089/109493100316247
  14. Conati C, Gertner A, VanLehn K. Using Bayesian networks to manage uncertainty in student modeling. Journal of User Modeling and User-Adapted Interaction. 2002;12:371-417. DOI: 10.1023/A:1021258506583
  15. Rodríguez V, Ayala G. Adaptivity and adaptability of learning object's interface. International Journal of Computer Applications. 2012;37:6-13. Available from: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.259.2896&rep=rep1&type=pdf [Accessed: June 07, 2018]
  16. Reynolds M. Learning styles: A critique. Management Learning. 1997;28:115-133. DOI: 10.1177/1350507697282002
  17. Mitrovic A. Modeling domains and students with constraint-based modeling. In: Nkambou R, Bourdeau J, Mizoguchi R, editors. Advances in Intelligent Tutoring Systems. Studies in Computational Intelligence. Berlin Heidelberg: Springer Verlag; 2015. pp. 63-80. DOI: 10.1007/978-3-642-14363-2_4
  18. Olsen JK, Belenky DM, Aleven V, Rummel N. Intelligent tutoring systems for collaborative learning: Enhancements to authoring tools. In: Lane HC, Yacef K, Mostow J, Pavlik P, editors. Artificial Intelligence in Education. 16th International Conference, AIED 2013. Berlin Heidelberg: Springer; 2013. pp. 900-903. DOI: 10.1007/978-3-642-39112-5_141
  19. San Pedro MO, Baker RS, Gowda SM, Heffernan NT. Towards an understanding of affect and knowledge from student interaction with an intelligent tutoring system. In: Lane HC, Yacef K, Mostow J, Pavlik P, editors. Artificial Intelligence in Education. 16th International Conference, AIED 2013. Berlin Heidelberg: Springer; 2013. pp. 41-50. DOI: 10.1007/978-3-642-39112-5_5
  20. Harris D. Engineering psychology and cognitive ergonomics. In: 12th International Conference on Engineering Psychology and Cognitive Ergonomics (EPCE), Held as Part of HCI International 2015; 2-7 August 2015; Las Vegas, USA. 2015. pp. 365-372. DOI: 10.1007/978-3-642-21741-8
  21. Arieli-Attali M. Formative assessment with cognition in mind: The cognitively based assessment of, for and as learning (CBAL™) research initiative at Educational Testing Service. In: Proceedings of the 39th Annual Conference on Educational Assessment 2.0: Technology in Educational Assessment; 2013. pp. 1-11
  22. Rainer L. Using semantic networks for assessment of learners' answers. In: The Sixth IEEE International Conference on Advanced Learning Technologies (ICALT '06); July 2005; Kerkrade, Netherlands. pp. 1070-1072. DOI: 10.1109/ICALT.2006.1652631
  23. Tu LY, Hsu WL, Wu SH. A cognitive student model—An ontological approach. In: Proceedings of the International Conference on Computers in Education (ICCE '02); 3-6 December 2002; Auckland, New Zealand; 2002. pp. 163-164. DOI: 10.1109/CIE.2002.1185877
  24. Coltheart MR, Perry C, Langdon R, Ziegler J. DRC: A dual route cascaded model of visual word recognition and reading aloud. Psychological Review. 2001;108:204-256. Available from: http://psycnet.apa.org/buy/2001-16162-009 [Accessed: May 07, 2018]
  25. De Sousa LB, Gabriel R. Does the mental lexicon exist? Revista de Estudos da Linguagem. 2015;23:335-361. DOI: 10.17851/2237.2083.23.2.335-361
  26. Huth AG, de Heer WA, Griffiths TL, Theunissen FE, Gallant JL. Natural speech reveals the semantic maps that tile human cerebral cortex. Nature. 2016;532:453-468. DOI: 10.1038/nature17637
  27. Ullman MT. The biocognition of the mental lexicon. In: Gaskell MG, editor. The Oxford Handbook of Psycholinguistics. Oxford, UK: Oxford University Press; 2007. pp. 267-286. DOI: 10.1093/oxfordhb/9780198568971.001.0001
  28. Rogers T, McClelland JL. Semantic Cognition: A Parallel Distributed Processing Approach. Massachusetts, USA: MIT Press; 2004. DOI: 10.1017/S0140525X0800589X
  29. Elman JL. On the meaning of words and dinosaur bones: Lexical knowledge without a lexicon. Cognitive Science. 2009;33:1-36. DOI: 10.1111/j.1551-6709.2009.01023.x
  30. Salcedo P, Pinninghoff MA, Contreras R, Figueroa JF. An adaptive hypermedia model based on student's lexicon. Expert Systems. 2017;34:e12222. DOI: 10.1111/exsy.12222
  31. Joughin G. Introduction: Refocusing assessment. In: Gordon J, editor. Assessment, Learning and Judgement in Higher Education. New York: Springer; 2009. pp. 1-11. DOI: 10.1007/978-1-4020-8905-3_1
  32. Keppel M, Carless D. Learning-oriented assessment: A technology-based case study. Assessment in Education: Principles, Policy & Practice. 2006;13:179-191. DOI: 10.1080/09695940600703944
  33. Lopez RE, Morales MG, Hedlefs AM, Gonzalez TC. New empirical directions to evaluate online learning. International Journal of Advances in Psychology. 2014;3:40-47. DOI: 10.14355/ijap.2014.0302.03
  34. Morales MG, Lopez RE, Velasco MD. Alternative e-learning assessment by mutual constrain of responsive and constructive techniques of knowledge acquisition evaluation. International Journal for Infonomics. 2016;9:1195-1200. DOI: 10.20533/iji.1742.4712.2016.0145
  35. Morales MG, Lopez RE, Castro C, Villarreal G, Gonzales TC. Cognitive analysis of meaning and acquired mental representations as an alternative measurement method technique to innovate e-assessment. European Journal of Educational Research. 2017;6:455-464. DOI: 10.12973/eu-jer.6.4.455
  36. Lopez RE, Theios J. Single word schemata priming: A connectionist approach. In: The 69th Annual Meeting of the Midwestern Psychological Association; Chicago, IL; 1996
  37. Gonzalez CJ, Lopez RE, Morales GE. Evaluating moral schemata learning. International Journal of Advances in Psychology. 2013;2:130-136
  38. Morales MG, Lopez RE, Lopez GA. New approaches to e-cognitive assessment of e-learning. International Journal for e-Learning Security (IJeLS). 2015;5:449-453. DOI: 10.20533/ijels.2046.4568.2015.0057
  39. Morales MG, Lopez RE. Cognitive responsive e-assessment of constructive e-learning. Journal of e-Learning and Knowledge Society. 2016;12:10-19. DOI: 10.20368/1971-8829/1187
  40. Figueroa JG, Gonzales GE, Solis V. An approach to the problem of meaning: Semantic networks. Journal of Psycholinguistic Research. 1976;5:107-115. DOI: 10.1007/BF01067252
  41. Morales MG, Santos AM. Alternative empirical directions to evaluate schemata organization and meaning. Advances in Social Sciences Research Journal. 2015;2:51-58. DOI: 10.14738/assrj.29.2015
  42. Rumelhart DE, Smolensky P, McClelland JL, Hinton GE. Schemata and sequential thought processes. In: McClelland JL, Rumelhart DE, The PDP Research Group, editors. Parallel Distributed Processing: Explorations in the Microstructure of Cognition. Volume 2: Psychological and Biological Models. Massachusetts: MIT Press; 1986. pp. 7-57
  43. Gonzales CJ, Lopez RE, Hedlefs MI. A new empirical approach to assess learning outcomes due to meaning formation and constructive knowledge. In: International Conference on Education Technology Management; University of Barcelona, Spain; December 19-21, 2018
  44. Torres GF, Lopez RE. Rastreo de la Información en Páginas web a través del Significado (Foraging information in web pages through meaning). Daena: International Journal of Good Conscience. 2010;5:308-323. Available from: http://www.spentamexico.org/v5-n2/5(2)308-323.pdf [Accessed: June 07, 2018]
  45. Hopcroft JE, Motwani R, Ullman JD. Teoría de autómatas, Lenguajes y computación (Introduction to Automata Theory, Languages, and Computation). 3rd ed. New York: PEARSON Addison-Wesley; 2008. 434 p
