
Drawings as Diagnostic Cues for Metacomprehension Judgment

Written By

Keith Thiede, Katherine L. Wright, Sara Hagenah and Julianne Wenner

Submitted: 20 February 2019 Reviewed: 20 May 2019 Published: 19 June 2019

DOI: 10.5772/intechopen.86959

From the Edited Volume

Metacognition in Learning

Edited by Nosisi Feza


Abstract

The accuracy of comprehension monitoring affects the effectiveness of rereading, which in turn affects comprehension. Thus, much research has focused on finding ways to improve monitoring accuracy. The cue-utilization framework of metacognitive monitoring offers a way to understand how to improve monitoring accuracy. It suggests that accuracy is driven by the cues people use to judge comprehension. When people utilize cues that are highly diagnostic of performance on a test of comprehension, accuracy should improve. Many interventions that improve monitoring accuracy have attributed the improvement to increased access to highly diagnostic cues, but have not identified which cues those are. In our recent research, we found that instructing students to generate drawings before judging comprehension improved monitoring accuracy. Using a graphic analysis protocol, we identified highly diagnostic cues. In this chapter, we describe the procedure we used to identify these cues in drawings.

Keywords

  • metacomprehension accuracy
  • drawing
  • cue diagnosticity
  • cue utilization

1. Introduction

In their seminal work, Nelson and Narens [1] described the theoretical relation between metacognitive monitoring and regulation of behavior. Building on this work, contemporary models of self-regulated learning describe learning as the interaction between metacognitive monitoring and regulation of study [2, 3, 4, 5, 6]. In particular, according to these models, as people study, they monitor their learning and use this information to guide subsequent study. Thus, accurate monitoring is required to effectively and efficiently manage one’s study [7, 8]. If people do not accurately differentiate well-learned materials from less-learned materials, they could waste time studying material that is already well learned or, worse, fail to restudy material that has not yet been adequately learned. Given the important role that monitoring plays in learning, it is important to find ways to improve the accuracy of metacognitive monitoring.

Accurate metacognitive monitoring is especially important in the area of reading [7]. A number of interventions have been developed to improve the accuracy of comprehension monitoring (called metacomprehension accuracy [9]). However, only recently have researchers examined the effect of drawing on metacomprehension accuracy. The primary objective of this chapter is to present data that provide a potential explanation for the beneficial effect of drawing on metacomprehension accuracy.

To provide context for our study, we will first describe how metacomprehension has been measured. We will then present a theoretical framework that identifies key factors for improving monitoring accuracy and show how this framework can help explain why previous interventions have improved metacomprehension accuracy. Finally, we will present empirical evidence that suggests drawing improves metacomprehension accuracy by providing access to cues that are diagnostic of comprehension and facilitating the utilization of these cues when judging comprehension.


2. Measuring metacomprehension accuracy

Glenberg and Epstein [9] developed the now widely used procedure for studying metacomprehension accuracy. They had participants read a series of short texts, judge their understanding of each text, and then complete a test for each text.

Metacomprehension accuracy describes the relation between a person’s judgments of comprehension and actual test performance. Accuracy can be described in two distinct ways. One is the degree to which the magnitude of the judgments is related to the actual magnitude of test performance. This kind of accuracy has been called absolute accuracy (also called calibration). Absolute accuracy indicates the degree to which a person is over- or underconfident about his or her performance. The other measure of accuracy is the degree to which the judgments discriminate between different levels of performance across texts. This kind of accuracy has been called relative accuracy (also called resolution) and is reported as the intra-individual correlation between predicted and actual performance computed across texts. Relative accuracy indicates the degree to which a person can differentiate better-learned materials from lesser-learned materials.

These measures are theoretically orthogonal [10]. That is, while absolute and relative accuracy could both be high for a person, absolute accuracy could be high while relative accuracy is low or vice versa. Moreover, variables that influence one kind of accuracy may not influence the other. For example, domain knowledge has been shown to influence absolute accuracy, but does not influence relative accuracy [11]. Thus, to avoid confusion, it is important to be clear whether one is examining absolute or relative accuracy. We will focus on relative accuracy for the remainder of this chapter.
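To make these two measures concrete, the short sketch below (ours, not part of any of the studies reviewed here) computes both for a single hypothetical reader. Absolute accuracy is operationalized as the mean signed difference between judgments and test scores, which is one common choice, and relative accuracy as the intra-individual Pearson correlation computed across texts; all data values are illustrative.

```python
import numpy as np

def absolute_accuracy(judgments, scores):
    """Mean signed bias (judgment minus performance): positive values
    indicate overconfidence, negative values indicate underconfidence."""
    j, s = np.asarray(judgments, dtype=float), np.asarray(scores, dtype=float)
    return float(np.mean(j - s))

def relative_accuracy(judgments, scores):
    """Intra-individual correlation between predicted and actual test
    performance computed across texts (resolution)."""
    return float(np.corrcoef(judgments, scores)[0, 1])

# Hypothetical reader: predicted and actual scores for five texts (0-5 scale)
judgments = [4, 2, 5, 3, 1]
scores    = [3, 2, 4, 4, 0]

print(f"absolute accuracy (bias): {absolute_accuracy(judgments, scores):+.2f}")
print(f"relative accuracy (resolution): {relative_accuracy(judgments, scores):.2f}")
```

A reader could be well calibrated (bias near zero) yet poor at discriminating across texts, or vice versa, which is the orthogonality described below.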


3. A framework for improving metacomprehension accuracy

Understanding approaches to improving metacomprehension accuracy requires theories of both metacognitive monitoring and comprehension [12]. Rawson et al. [13] used the cue-utilization framework of metacognitive monitoring [14] and the construction-integration model of comprehension [15] to identify ways to improve metacomprehension accuracy. The cue-utilization framework suggests that people’s metacognitive judgments are not based on direct access to memory and comprehension processes; instead, judgments are based on cues people have available about the content of their memory and comprehension processes. The accuracy of metacomprehension judgments is then determined by the degree to which the cues used to judge comprehension are diagnostic (predictive) of performance on a test of comprehension.

Theories of comprehension, like the construction-integration model [15], help identify the cues that should be diagnostic of performance on tests of comprehension. According to this model, readers construct meaning from text at several levels: a lexical or surface level, a textbase level, and a situation model level. The lexical level is constructed as the words and phrases appearing in the text are encoded. The textbase level is constructed as segments of the surface text are parsed into propositions, and as links between text propositions are formed based on argument overlap and other text-explicit factors. Deeper understanding of the text is constructed at the level of the situation model, which involves connecting information from the text with the person’s prior knowledge and using it to generate inferences and implications from the text. A person’s situation model largely determines performance on tests of comprehension [16]. Thus, getting people to base their metacomprehension judgments on cues related to their situation model rather than their textbase should increase the predictive accuracy of judgments [13, 17].

As noted by Thiede and de Bruin [18], interventions designed to improve metacomprehension accuracy have attempted to focus readers on cues related to the situation model when judging comprehension. Some have sought to increase the salience of diagnostic cues by instructing readers to encode texts in specific ways that facilitate construction of the situation model, while others have done so by instructing readers to retrieve information about the texts prior to judging comprehension. These approaches alter the standard experimental procedure developed by Glenberg and Epstein [9]; the changes are illustrated in Figure 1.

Figure 1.

Approaches to improving metacomprehension accuracy.


4. Encoding-based approaches to improving metacomprehension

One approach shown to improve metacomprehension accuracy is to provide instructions for reading texts that promote construction of the situation model—connecting ideas in a text to one another and to one’s prior knowledge. By promoting construction of the situation model during reading, cues associated with the situation model presumably become more salient at the time of judging comprehension. Given these cues should be predictive of test performance, using these cues for judgments should increase metacomprehension accuracy.

In the metacomprehension literature, two studies have examined how metacomprehension accuracy is affected by promoting construction of the situation model while reading. Specifically, Redford et al. [19] examined the effect of concept mapping on metacomprehension accuracy. A concept map is a graphic representation of the underlying structure of the meaning of a text. Constructing concept maps can be an effective organizational strategy, which helps readers connect ideas in a text [20]. Thus, as concept mapping helps readers construct a situation model for a text, Redford et al. [19] hypothesized that generating concept maps would increase metacomprehension accuracy. In accord with this hypothesis, they showed that metacomprehension accuracy was greater for a group who generated concept maps than for a group presented with concept maps during reading and for a control group. Concept mapping has also been shown to improve metacomprehension accuracy for at-risk readers [21].

Another technique used to promote construction of the situation model is to have readers self-explain while reading. Chi [22] showed that self-explanation improved reading comprehension by helping readers connect ideas into a more coherent representation of a text. Griffin et al. [23] hypothesized that self-explaining would help students connect ideas within a text and would focus students on cues related to the situation model when judging comprehension, thereby improving their metacomprehension accuracy. Consistent with this hypothesis, Griffin and colleagues showed that accuracy was greater for a group of college students who self-explained as they read than for a group who did not self-explain while reading.

In sum, interventions that promote construction of a situation model for a text during reading improve metacomprehension accuracy. These interventions appear to focus readers on diagnostic cues for judging comprehension. The literature suggests the effects on metacomprehension are robust; interventions that promote development of a situation model have improved accuracy for typical and at-risk college students, as well as for younger students.


5. Retrieval-based approaches to improving metacomprehension

Another approach to improving metacomprehension accuracy is to incorporate a retrieval attempt prior to judging comprehension into the standard procedure [24]. According to the cue-utilization framework of metacognitive monitoring [14], as the person contemplates how well a text was understood, he or she may rely on a variety of cues to make this judgment. Retrieving information about texts may allow a reader to evaluate the quality of cues used to retrieve information about a text. That is, when judging comprehension, the person may reflect on how successfully he or she had retrieved information. Accordingly, a text may receive a high rating of comprehension if the person had been able to retrieve a great deal of information about the text during the retrieval attempt. By contrast, a text may receive a low rating of comprehension if the person struggled to retrieve information about the text. Assuming availability of information during the retrieval attempt is related to availability of information at the time of the test, then using the retrieval of information as a basis for metacomprehension judgments should improve metacomprehension accuracy because the basis of the judgments should be highly related to test performance.

Accuracy of metacomprehension judgments may be affected by the timing of the retrieval attempt. Activation theories [25] may help explain why. According to these theories, spreading activation occurs during reading, and more information is active in working memory shortly after reading than after a delay, when information has decayed from memory. When retrieving information immediately after reading, a person may have access to a highly active mental network. That is, even for less-understood texts, the person may have access to information in short-term memory. Thus, the retrieval attempt for less-understood and more-understood texts may produce a set of homogeneous cues for judging comprehension that may not help discriminate less-understood texts from more-understood texts. By contrast, when the retrieval attempt occurs after a delay, activation of the mental network for a text may have decayed, and a person may have access to only that information retrievable from long-term memory. Thus, for a less-understood text, the person may have little to draw on when retrieving information; whereas, for a more-understood text, the person may retrieve much more information. Retrieving information after a delay may produce a set of heterogeneous cues for judging comprehension that may highlight differences between more-understood texts and less-understood texts. Moreover, cues available in long-term memory are likely to be highly indicative of test performance because both the retrieval attempts and the tests occur after a delay and are based on retrieval of information from long-term memory. Thus, retrieval after a delay may produce higher levels of metacomprehension accuracy.

Researchers have examined the effect of different retrieval tasks on metacomprehension accuracy. For instance, Thiede and Anderson evaluated the effect of writing summaries on metacomprehension accuracy [26]. They compared metacomprehension accuracy across three groups: a control group, an immediate-summary group, and a delayed-summary group. The control group read a set of texts, judged comprehension of each text, and then completed a test of each text. The immediate-summary group read a text and then immediately wrote a summary for that text. After reading and summarizing each text, they made metacomprehension judgments and completed a test for each text. The delayed-summary group read all the texts and then wrote summaries for each text. After reading and summarizing all the texts, they made metacomprehension judgments and completed a test for each text. Consistent with the theory outlined above, metacomprehension accuracy was greater for the delayed-summary group than for the other groups. This effect holds for typical and at-risk college students [27].

Thiede and colleagues evaluated the efficacy of a less time-consuming retrieval task on metacomprehension accuracy [28]. They had students generate a list of five keywords that captured the essence of each text. Metacomprehension accuracy was greater for the delayed-keyword group than for the immediate-keyword group or the control group. The delayed-keyword effect has been replicated with college students [29, 30] and younger students [31, 32].

van Loon et al. [33] evaluated the effect of completing an informational diagram of cause-and-effect relations on metacomprehension accuracy. Students read short texts describing cause-and-effect relations. Then they were shown a diagram of the cause-and-effect relation described in a text with key information deleted from the diagram. Participants in diagramming groups were instructed to provide the missing information. Metacomprehension accuracy was greater for the group that completed diagrams after a delay than for the group that completed diagrams immediately after reading or for the group that did not complete diagrams.

In sum, retrieving information about texts prior to judging comprehension improves metacomprehension accuracy, but only when the retrieval attempt is delayed. A variety of retrieval tasks have been shown to improve metacomprehension accuracy. The literature suggests the effects on metacomprehension are robust; retrieval tasks have improved accuracy for typical and at-risk college students, as well as for younger students.


6. Drawing to improve metacomprehension accuracy

Theoretically, drawing has promise as an intervention to improve metacomprehension because it has been shown to facilitate construction of the situation model. The results examining the effect of drawing on learning are mixed, with some studies showing that drawing improves learning [34] and others showing no benefit of drawing [35]. However, the results fairly consistently show that drawing improves conceptual understanding but not factual learning [36]. Put differently, deep comprehension, which requires a complete mental model, benefits from drawing.

The generative theory of drawing construction [36] helps explain the benefit of drawing for conceptual understanding and comprehension. According to this theory, readers construct a verbal representation of the written words and a visual representation when drawing. Constructing a mental model of the content involves (a) selecting key elements from the verbal and visual representations, (b) organizing the key elements and connecting them to prior knowledge, and (c) integrating the verbal and visual representations into a coherent mental model. Thus, a drawing generated while reading represents a reader’s integrated verbal and visual representations, which may provide a more coherent representation of a phenomenon than a representation based purely on verbal information (e.g., a summary of a text).

A high quality drawing connects key elements and illustrates how the system as a whole functions. If a person can create a high quality drawing, he or she should be able to perform well on a test of deeper comprehension because the drawing and the test both depend on a coherent mental model. If a person cannot generate a high quality drawing, he or she should not be able to perform well on a test of deeper comprehension. Therefore, the quality of a drawing should be predictive of performance on a test of comprehension—and using drawings as a cue for judging comprehension should promote high levels of metacomprehension accuracy. Thus, drawing while reading has potential as an encoding-based approach to improving metacomprehension accuracy.

Drawings have also been shown to provide valuable feedback regarding level of understanding [37]. That is, drawings help students identify gaps in understanding. Thus, drawing also has potential as a retrieval-based approach to improving metacomprehension accuracy.

Despite the theoretical appeal of using drawings to improve metacomprehension accuracy, only recently have researchers examined the effect of drawing on accuracy. In particular, drawing has been used as an encoding task [38, 39] and as a retrieval task [40]. The results of these studies are mixed; however, methodological differences make it difficult to compare the results across studies.

Drawing had no effect on metacomprehension accuracy in two studies [38, 40]. In these studies, rather than reading a set of different texts and generating a drawing for each, participants read contiguous texts and generated a single drawing based on all of the texts. Although generating a single drawing might help participants create a model for all the texts, it would not likely provide cues to help participants differentiate more-understood from less-understood texts. Without cues for individual texts to help differentiate texts, it is not surprising that drawing did not improve metacomprehension accuracy.

By contrast, Thiede et al. [39] had fifth grade students generate drawings for different science texts while they read. Students then predicted their performance and completed a test for each text. This is the standard experimental procedure with the encoding-based approach to influencing metacomprehension, as illustrated in Figure 1. A key finding of this study was that drawing dramatically improved metacomprehension accuracy when students received instruction on generating organizational drawings (drawings that connect ideas within a text to one another and to prior knowledge). By contrast, drawing had no effect on metacomprehension accuracy when students were not instructed to create organizational drawings. The current study builds on the study by Thiede et al. [39].

6.1 Overview of study and study design

According to the cue-utilization framework, monitoring accuracy is dependent on cue diagnosticity (how predictive a cue is of test performance) and cue utilization (which cues a person uses for the metacognitive judgment). van Loon et al. [33] developed a procedure to decompose judgment accuracy into these two components. In particular, they examined the diagnosticity of a cue by computing the correlation between the cue and test performance across texts. Similarly, they examined cue utilization by computing the correlation between the cue and the metacomprehension judgment across the texts. As in Thiede et al. [39], we used an experimental design to examine the effect of drawing instruction on cue diagnosticity and cue utilization.
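The sketch below illustrates this decomposition for a single hypothetical student. Following common practice in the metacomprehension literature, the intra-individual correlations are computed here with the Goodman-Kruskal gamma, a rank-order statistic (Pearson correlations could be substituted); the cue, judgment, and test values are illustrative only and are not taken from our data.

```python
from itertools import combinations

def gamma(xs, ys):
    """Goodman-Kruskal gamma: (concordant - discordant) pairs divided by
    (concordant + discordant) pairs, ignoring pairs tied on either variable."""
    concordant = discordant = 0
    for (x1, y1), (x2, y2) in combinations(zip(xs, ys), 2):
        product = (x1 - x2) * (y1 - y2)
        if product > 0:
            concordant += 1
        elif product < 0:
            discordant += 1
    if concordant + discordant == 0:
        return float("nan")  # all pairs tied on one of the variables
    return (concordant - discordant) / (concordant + discordant)

# Illustrative per-text data for one hypothetical student (five texts)
cue_scores  = [3, 1, 2, 0, 2]   # e.g., a score derived from each drawing
judgments   = [5, 3, 2, 1, 4]   # predicted test performance for each text
test_scores = [4, 1, 3, 0, 2]   # actual test performance for each text

cue_diagnosticity   = gamma(cue_scores, test_scores)  # cue vs. test performance
cue_utilization     = gamma(cue_scores, judgments)    # cue vs. judgment
monitoring_accuracy = gamma(judgments, test_scores)   # judgment vs. test performance
print(cue_diagnosticity, cue_utilization, monitoring_accuracy)
```

Monitoring accuracy should be high only when a cue that is diagnostic (correlated with test performance) is also utilized (correlated with the judgments).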

We evaluated the effect of two kinds of drawing instruction on cue diagnosticity and cue utilization. Ninety-two fifth grade students were randomly assigned to two instructional groups. Students in each group read five texts on different science topics and generated drawings as they read. They then predicted their performance and completed a test for each text. The Organizational-Drawing group (n = 47) received instruction on generating organizational drawings of scientific texts, which emphasized including relational information in their drawings. The Representational-Drawing group (n = 45) received instruction on generating representational drawings, which emphasized including many elements in their drawings. As the organizational instructions were designed to promote connecting ideas in the text to each other and to prior knowledge, we hypothesized that this group would generate more diagnostic cues than would the group receiving representational instructions.

6.2 Potential cues for metacomprehension judgments

As noted above, theories of comprehension, like the construction integration model [15], define deeper comprehension as a representation of a text that includes connections of ideas contained in a text to each other and prior knowledge (the situation model). The metacomprehension literature suggests that metacomprehension accuracy improves when people base their metacomprehension judgments on cues related to their situation model. Moreover, studies of self-reported cue use provide evidence that accuracy is greater for people who report using cues related to the situation model (i.e., their ability to link ideas contained in a text) than for people who reported using other cues [21]. Thus, cues that provide information related to connecting ideas and use of prior knowledge should be highly diagnostic.

To examine cue diagnosticity and cue utilization of drawings, we refined the graphic analysis protocol (GAP), which had been used to score graphics contained in science textbooks [41, 42], to score student drawings of scientific texts. The GAP-drawing provides a more fine-grained measure of drawing quality than the overall measure of quality typically used in the drawing literature [43]. The GAP-drawing provides scores on two broad dimensions: drawing content and drawing relations.

Drawing Content describes the composition and substance of drawings. For each text, we created a master list of the actions, elements, and big ideas described in the text. We then scored each drawing for the number of these attributes. We also scored drawings for the number of novel elements (elements related to the topic but not explicitly described in the text) and the number of unrelated elements.

Drawing Relations describes the relations among the elements in the drawing. Based on the definition of systematicity for published graphics, the systematicity of a drawing describes how well the drawing demonstrates that the reader has built a situation model of the system described in a text. Systematicity is scored from 1 to 3: 1 (low) indicates the drawing illustrates isolated units not integrated into a larger system, 2 (medium) indicates the drawing captures some aspects of the system, and 3 (high) indicates the drawing is a complete model of the system. Semantic relations describe how the text and drawing are related. Drawings earn a score of 0 when they are only vaguely related to the text content, 1 (representational) when they directly show what was described in the text, 2 (organizational) when they add coherence by putting the information within a greater scheme or system, and 3 (interpretational) when they contain both representational and organizational elements and extend them by showing how the elements are related. Connections describe whether drawings represent the information in the text and also include information from the reader’s background knowledge or prior learning. A drawing scored 0 adds no information beyond what is present in the text; 1 provides additional examples of a topic described in the text; 2 includes additional examples of a process or phenomenon not explicitly described in the text; and 3 appropriately connects the information to a different field of scientific study. Captions and labels can identify the parts of a diagram, the steps in a process, or both. We scored the captions and/or labels on a scale of 0-4: 0 indicates no captions, 1 indicates captions that only identify the target of the graphic, 2 indicates captions that identify parts, 3 indicates captions that identify the steps in a system, and 4 indicates captions that identify both the parts and the steps in a system. We hypothesized that the drawing relations metrics would be more diagnostic than the drawing content metrics because they capture features of a situation model.
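As a compact summary of the rubric just described, the sketch below records one drawing’s GAP-drawing scores as a simple data structure with the ranges given above; the class and field names are our own shorthand for illustration, not part of the published protocol.

```python
from dataclasses import dataclass

@dataclass
class GAPDrawingScore:
    """GAP-drawing scores for one student's drawing of one text."""
    # Drawing content: counts scored against the master list for the text
    n_actions: int
    n_related_elements: int
    n_novel_elements: int      # related to the topic but not explicitly in the text
    n_unrelated_elements: int
    n_big_ideas: int
    # Drawing relations: ordinal scales described above
    systematicity: int         # 1 (isolated units) to 3 (complete model of the system)
    semantic_relations: int    # 0 (vaguely related) to 3 (interpretational)
    connections: int           # 0 (adds nothing beyond the text) to 3 (links to another field)
    captions: int              # 0 (no captions) to 4 (identifies both parts and steps)

    def __post_init__(self):
        # Basic range checks mirroring the rubric
        assert 1 <= self.systematicity <= 3
        assert 0 <= self.semantic_relations <= 3
        assert 0 <= self.connections <= 3
        assert 0 <= self.captions <= 4

# Example: a drawing with several related elements, one big idea, and a partial system
example = GAPDrawingScore(n_actions=2, n_related_elements=6, n_novel_elements=1,
                          n_unrelated_elements=0, n_big_ideas=1,
                          systematicity=2, semantic_relations=2,
                          connections=1, captions=2)
```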

For each text, students generated a drawing as they read. Students also made a metacomprehension judgment (i.e., they predicted their performance on a five-item test of comprehension) and completed an inference test of reading comprehension for each text. Drawings were scored using the GAP-drawing. Cue diagnosticity was operationalized as the intra-individual correlation between drawing metrics and test performance. Cue utilization was operationalized as the intra-individual correlation between drawing metrics and metacomprehension judgments. To illustrate these measures and how cue diagnosticity and cue utilization influence metacomprehension accuracy, consider the example shown in Table 1.

Text | Judgment | Performance | Number of elements | Number of big ideas | Connections
Text 1 | 5 | 4 | 12 | 5 | 3
Text 2 | 2 | 3 | 21 | 3 | 2
Text 3 | 4 | 3 | 18 | 1 | 1
Text 4 | 2 | 1 | 10 | 4 | 1
Text 5 | 1 | 0 | 16 | 2 | 0

Metacomprehension accuracy = 0.78

Cue | Cue diagnosticity | Cue utilization
Number of elements | 0.20 | −0.11
Number of big ideas | 0.40 | 0.33
Connections | 1.00 | 0.75

Table 1.

Sample data to illustrate cue diagnosticity and cue utilization.

For the student in Table 1, the number of elements was fairly weakly correlated with test performance, which indicates that this cue is not diagnostic of performance on the test of comprehension. The number of big ideas was more strongly correlated with test performance than was the number of elements, but the correlation is only moderate. By contrast, connections are perfectly correlated with test performance: test performance was higher for texts with higher connections scores and lower for texts with lower connections scores. Connections are therefore a highly diagnostic cue of comprehension. Regarding cue utilization, the number of elements is weakly and negatively correlated with metacomprehension judgments, the number of big ideas is moderately correlated with judgments, and connections are highly correlated with judgments. These correlations suggest that this student used connections as a basis for metacomprehension judgments and relied less on the number of big ideas and the number of elements to judge comprehension.

Cue diagnosticity and cue utilization help explain the relatively high level of metacomprehension accuracy for this student (metacomprehension accuracy = 0.78). For this student, connections were a highly diagnostic cue, and the student used this cue when judging comprehension. Had this student relied heavily on the number of elements to judge comprehension, metacomprehension accuracy would have been reduced because the number of elements is not predictive of test performance.

6.3 Results of study

This chapter focuses on cue diagnosticity and cue utilization; however, it is important to note that metacomprehension accuracy was significantly greater for the Organizational-Drawing group (mean metacomprehension accuracy = 0.51) than for the Representational-Drawing group (mean metacomprehension accuracy = −0.03). Cue diagnosticity and cue utilization help explain the difference in accuracy across groups.

As shown in Table 2, several drawing metrics were predictive of performance on tests of comprehension for the Organizational-Drawing group. In particular, for this group, systematicity, semantic relations, connections, and the number of big ideas were significantly correlated with test performance. By contrast, for the Representational-Drawing group, none of the drawing metrics were predictive of comprehension test performance.

Drawing metrics | Representational | Organizational
Drawing content
Number of actions | −0.15 (0.13) | 0.13 (0.11)
Number of related elements | −0.01 (0.11) | 0.18 (0.11)
Number of novel elements | −0.12 (0.12) | 0.13 (0.13)
Number of unrelated elements | −0.15 (0.14) | −0.01 (0.14)
Number of big ideas | −0.14 (0.18) | 0.42 (0.14)*
Drawing relations
Systematicity | 0.22 (0.23) | 0.24 (0.11)*
Semantic relations | 0.03 (0.15) | 0.22 (0.11)*
Connections | −0.09 (0.16) | 0.66 (0.18)*
Number of captions | −0.02 (0.13) | 0.15 (0.12)

Table 2.

Cue diagnosticity for drawing metrics by group.

*Indicates a correlation is significantly different from zero (p < 0.05).


Note: the number in parentheses is the standard error of the mean.

These results suggest that instruction on how to generate drawings significantly affects cue diagnosticity. That is, with instruction on how to generate organizational drawings, drawing metrics related to connecting ideas to one another and to prior knowledge are predictive of performance on a test of comprehension (see the rightmost column of Table 2). It is important to note that the cues identified as diagnostic for this group are those hypothesized to be predictive of comprehension by theories of comprehension. Without instruction on generating organizational drawings, drawings do not provide diagnostic cues. Thus, for this group, drawing does little to provide useful cues for judging comprehension.

To better understand how these cues might affect metacomprehension accuracy, we need to examine cue utilization. As shown in Table 3, for the Organizational-Drawing group, a variety of drawing metrics were correlated with metacomprehension judgments, which suggests students in this group utilized a number of different drawing metrics in making their judgments. Most importantly, this group utilized four of the cues that were highly diagnostic of performance on the comprehension tests (i.e., systematicity, semantic relations, connections, and the number of big ideas). By contrast, for the Representational-Drawing group, only connections were correlated with metacomprehension judgments. However, for this group, connections were not correlated with test performance; therefore, utilizing this cue would not contribute to a high level of judgment accuracy.

Drawing metrics | Representational | Organizational
Drawing content
Number of actions | −0.16 (0.12) | 0.24 (0.09)*
Number of related elements | 0.14 (0.11) | 0.07 (0.11)
Number of novel elements | 0.40 (0.09) | 0.10 (0.11)
Number of unrelated elements | 0.07 (0.10) | −0.11 (0.13)
Number of big ideas | 0.10 (0.13) | 0.22 (0.10)*
Drawing relations
Systematicity | 0.21 (0.15) | 0.27 (0.13)*
Semantic relations | 0.16 (0.11) | 0.24 (0.12)*
Connections | 0.73 (0.17)* | 0.60 (0.14)*
Number of captions | −0.02 (0.12) | 0.07 (0.10)

Table 3.

Cue utilization for drawing metrics by group.

*Indicates a correlation is significantly different from zero (p < 0.05).


Note: the number in parentheses is the standard error of the mean.

These results provide additional empirical evidence that metacomprehension accuracy is influenced by cue diagnosticity and cue utilization. Metacomprehension accuracy was greater for the Organizational-Drawing group than the Representational-Drawing group. Drawings provided diagnostic cues for the Organizational-Drawing group but not for the Representational-Drawing group. Moreover, diagnostic cues were utilized for metacomprehension judgments for the Organizational-Drawing group but not for the Representational-Drawing group.


7. Conclusions

Metacomprehension accuracy is important to reading comprehension because monitoring guides decisions about rereading [31, 44], which improves overall comprehension [32, 45]. Thus, it is important to find ways to improve metacomprehension accuracy.

The cue-utilization framework of metacognitive monitoring [14] suggests that improving monitoring accuracy involves identifying cues that are highly diagnostic of test performance and then instructing people to use those cues when making judgments. Thus, as described above, researchers have employed a variety of techniques to help readers construct a situation model, or to access the situation model prior to judging comprehension, because doing so arguably provides cues that are highly diagnostic of performance on tests of comprehension. Researchers have also employed other techniques to promote use of diagnostic cues when making metacomprehension judgments [18].

Recent research using drawing as an encoding task shows promise for improving metacomprehension accuracy. This research shows that drawings need to emphasize the underlying organization of the phenomenon described in the text to improve metacomprehension accuracy, which is consistent with research showing that the effect of graphics on metacomprehension accuracy is determined by the nature of the graphics presented with texts [46, 47, 48]. Specifically, organizational graphics improved metacomprehension accuracy, whereas other graphics had little or even adverse effects on accuracy [47].

The GAP-drawing provides a scoring system to help identify specific attributes of drawings that could be diagnostic of comprehension and utilized as a basis for metacomprehension judgments. Our findings suggest that with instruction on generating organizational drawings while reading, metrics related to drawing relations are predictive of test performance (diagnostic). Moreover, the instruction promoted utilization of these cues when judging comprehension.

Instructions focused on generating organizational drawings improved metacomprehension accuracy and comprehension. Thus, drawing can influence learning. More research is needed to identify the most effective instruction for drawing. With attention to cue diagnosticity and cue utilization, this research could reshape the field of metacomprehension.

References

  1. Nelson TO, Narens L. Metamemory: A theoretical framework and new findings. In: Bower GH, editor. The Psychology of Learning and Motivation. Vol. 26. New York: Academic Press; 1990. pp. 125-173
  2. Ariel R, Dunlosky J, Bailey H. Agenda-based regulation of study-time allocation: When agendas override item-based monitoring. Journal of Experimental Psychology: General. 2009;133:432-447. DOI: 10.1037/a0015928
  3. Metcalfe J. Is study time allocated selectively to a region of proximal learning? Journal of Experimental Psychology: General. 2002;131:349-363. DOI: 10.1037//0096-3445.131.3.349
  4. Thiede KW, Dunlosky J. Toward a general model of self-regulated study: An analysis of selection of items for study and self-paced study time. Journal of Experimental Psychology: Learning, Memory, and Cognition. 1999;25:1024-1037. DOI: 10.1037/0278-7393.25.4.1024
  5. Winne PH. Cognition and metacognition within self-regulated learning. In: Schunk DH, Greene JA, editors. Handbook of Self-Regulation of Learning and Performance. Third ed. New York, NY: Routledge Press; 2017. pp. 36-48
  6. Dunlosky J, Thiede KW. What makes people study more? An evaluation of factors that affect self-paced study. Acta Psychologica. 1998;98:37-56. DOI: 10.1016/S0001-6918(97)00051-6
  7. Cromley JG, Azevedo R. Testing and refining the direct and inferential mediation model of reading comprehension. Journal of Educational Psychology. 2007;99:311-325. DOI: 10.1037/0022-0663.99.2.311
  8. Winne PH, Perry NE. Measuring self-regulated learning. In: Boekaerts M, Pintrich P, editors. Handbook of Self-Regulation. New York: Academic Press, Inc.; 2000. pp. 531-566
  9. Glenberg AM, Epstein W. Calibration of comprehension. Journal of Experimental Psychology: Learning, Memory, and Cognition. 1985;11:702-718. DOI: 10.1037/0278-7393.11.1-4.702
  10. Dunlosky J, Thiede KW. Four cornerstones of calibration research: Why understanding students’ judgments can improve their achievement. Learning and Instruction. 2013;24:58-61. DOI: 10.1016/j.learninstruc.2012.05.002
  11. Griffin TD, Jee BD, Wiley J. The effect of domain knowledge on metacomprehension accuracy. Memory & Cognition. 2009;37:1001-1013. DOI: 10.3758/MC.37.7.1001
  12. Weaver CA. Constraining factors in calibration of comprehension. Journal of Experimental Psychology: Learning, Memory, and Cognition. 1990;16:214-222. DOI: 10.1037/0278-7393.16.2.214
  13. Rawson KA, Dunlosky J, Thiede KW. The rereading effect: Metacomprehension accuracy improves across reading trials. Memory & Cognition. 2000;28:1004-1010. DOI: 10.3758/BF03209348
  14. Koriat A. Monitoring one’s own knowledge during study: A cue-utilization approach to judgments of learning. Journal of Experimental Psychology: General. 1997;126:349-370. DOI: 10.1037/0096-3445.126.4.349
  15. Kintsch W. The use of knowledge in discourse processing: A construction-integration model. Psychological Review. 1988;95:163-182. DOI: 10.1037/0033-295X.95.2.163
  16. McNamara DS, Kintsch E, Songer NB, Kintsch W. Are good texts always better? Interactions of text coherence, background knowledge, and levels of understanding in learning from text. Cognition and Instruction. 1996;14:1-43. DOI: 10.1207/s1532690xci1401_1
  17. Wiley J, Griffin TD, Thiede KW. Putting the comprehension in metacomprehension. Journal of General Psychology. 2005;132:408-428. DOI: 10.3200/GENP.132.4.408-428
  18. Thiede KW, de Bruin ABH. Self-regulated learning in reading. In: Schunk DH, Greene JA, editors. Handbook of Self-Regulation of Learning and Performance. Third ed. New York, NY: Routledge Press; 2017. pp. 124-137
  19. Redford JS, Thiede KW, Wiley J, Griffin TD. Concept mapping improves metacomprehension accuracy among 7th graders. Learning and Instruction. 2012;22:262-270. DOI: 10.1016/j.learninstruc.2011.10.007
  20. Weinstein CE, Mayer RE. The teaching of learning strategies. In: Wittrock MC, editor. Handbook on Research in Teaching. Third ed. New York: Macmillan; 1986. pp. 315-327
  21. Thiede KW, Griffin TD, Wiley J, Anderson MCM. Poor metacomprehension accuracy as a result of inappropriate cue use. Discourse Processes. 2010;47:331-362. DOI: 10.1080/01638530902959927
  22. Chi MTH. Self-explaining expository texts: The dual processes of generating inferences and repairing mental models. In: Glaser R, editor. Advances in Instructional Psychology. Vol. 5. Mahwah, NJ: Lawrence Erlbaum Associates; 2009. pp. 161-238
  23. Griffin TD, Wiley J, Thiede KW. Individual differences, rereading, and self-explanation: Concurrent processing and cue validity as constraints on metacomprehension accuracy. Memory & Cognition. 2008;36:93-103. DOI: 10.3758/MC.36.1.93
  24. Glenberg ST, Epstein W, Morris C. Enhancing calibration of comprehension. Journal of Experimental Psychology: General. 1987;116:119-136. DOI: 10.1037//0096-3445.116.2.119
  25. Fletcher CR, van den Broek P, Arthur EJ. A model of narrative comprehension and recall. In: Britton BK, Graesser AC, editors. Models of Understanding Text. Mahwah, NJ: Erlbaum; 1996. pp. 141-164
  26. Thiede KW, Anderson MCM. Summarizing can improve metacomprehension accuracy. Contemporary Educational Psychology. 2003;28:129-160. DOI: 10.1016/S0361-476X(02)00011-5
  27. Anderson MCM, Thiede KW. Why do delayed summaries improve metacomprehension accuracy? Acta Psychologica. 2008;128:110-118. DOI: 10.1016/j.actpsy.2007.10.006
  28. Thiede KW, Anderson MCM, Therriault D. Accuracy of metacognitive monitoring affects learning of texts. Journal of Educational Psychology. 2003;95:66-73. DOI: 10.1037/0022-0663.95.1.66
  29. Chen Q. Metacomprehension monitoring and regulation in reading. Acta Psychologica Sinica. 2009;41:676-683. DOI: 10.3724/SP.J.1041.2009.00676
  30. Thiede KW, Dunlosky J, Griffin TD, Wiley J. Understanding the delayed keyword effect on metacomprehension accuracy. Journal of Experimental Psychology: Learning, Memory, and Cognition. 2005;31:1267-1280. DOI: 10.1037/0278-7393.31.6.1267
  31. de Bruin A, Thiede KW, Camp G, Redford JR. Generating keywords improves metacomprehension and self-regulation in elementary and middle school children. Journal of Experimental Child Psychology. 2011;109:294-310. DOI: 10.1016/j.jecp.2011.02.005
  32. Thiede KW, Redford JS, Wiley J, Griffin TD. Elementary school experience with comprehension testing may influence metacomprehension accuracy among 7th and 8th graders. Journal of Educational Psychology. 2012;104:554-564. DOI: 10.1037/a0028660
  33. van Loon MH, de Bruin ABH, van Gog T, van Merriënboer JJG, Dunlosky J. Can students evaluate their understanding of cause-and-effect relations? The effect of diagram completion on monitoring accuracy. Acta Psychologica. 2014;151:143-154. DOI: 10.1016/j.actpsy.2014.06.007
  34. Leopold C, Leutner D. Science text comprehension: Drawing, main idea selection, and summarizing as learning strategies. Learning and Instruction. 2012;22:16-26. DOI: 10.1016/j.learninstruc.2011.05.005
  35. Leutner D, Leopold C, Sumfleth E. Cognitive load and science text comprehension: Effects of drawing and mentally imagining text content. Computers in Human Behavior. 2009;25:284-289. DOI: 10.1016/j.chb.2008.12.010
  36. Van Meter P, Garner J. The promise and practice of learner-generated drawing: Literature review and synthesis. Educational Psychology Review. 2005;17:285-325. DOI: 10.1007/s10648-005-8136-3
  37. Van Meter P, Firetto CM. Cognitive model of drawing construction: Learning through the construction of drawings. In: Schraw G, McCrudden MT, Robinson D, editors. Learning Through Visual Displays. Charlotte, NC: Information Age Publishing; 2013. pp. 247-280
  38. Kostons D, de Koning BB. Does visualization affect monitoring accuracy, restudy choice, and comprehension scores of students in primary education? Contemporary Educational Psychology. 2017;51:1-10. DOI: 10.1016/j.cedpsych.2017.05.001
  39. Thiede K, Wright KL, Wenner J, Hagenah S. Drawing improves metacomprehension accuracy. In: Paper presented at the Annual Meeting of the Psychonomic Society; 14-17 November 2018; New Orleans, LA
  40. Schleinschok K, Eitel A, Scheiter K. Do drawing tasks improve monitoring and control during learning from text? Learning and Instruction. 2017;51:10-25. DOI: 10.1016/j.learninstruc.2017.02.002
  41. Guo D, Wright KL, McTigue EM. A content analysis of visuals in elementary school textbooks. The Elementary School Journal. 2018;119:244-269. DOI: 10.1086/700266
  42. Slough EW, McTigue EM, Kim S, Jennings SK. Science textbooks’ use of graphical representation: A descriptive analysis of four sixth grade science texts. Reading Psychology. 2010;31:301-325. DOI: 10.1080/02702710903256502
  43. Van Meter P. Drawing construction as a strategy for learning from text. Journal of Educational Psychology. 2001;93:129-140. DOI: 10.1037//0022-0663.93.1.129
  44. Thiede KW, Redford JS, Wiley J, Griffin TD. How restudy decisions affect overall comprehension for 7th grade students. British Journal of Educational Psychology. 2017;87:590-605. DOI: 10.1111/bjep.12166
  45. Rawson KA, O’Neil R, Dunlosky J. Accurate monitoring leads to effective control and greater learning of patient education materials. Journal of Experimental Psychology: Applied. 2011;17:228-302. DOI: 10.1037/a0024749
  46. Jaeger AJ, Velazquez MN, Dawdanow A, Shipley TF. Sketching and summarizing to reduce memory for seductive details in science text. Journal of Educational Psychology. 2018;110:899-916. DOI: 10.1037/edu0000254
  47. Jaeger AJ, Wiley J. Do illustrations help or harm metacomprehension accuracy? Learning and Instruction. 2014;34:58-73. DOI: 10.1016/j.learninstruc.2014.08.002
  48. Wiley J. Picture this! Effects of photographs, diagrams, animations, and sketching on learning and beliefs about learning from a geoscience text. Applied Cognitive Psychology. 2018;33:9-19. DOI: 10.1002/acp.3495
