Open access peer-reviewed chapter

Worked Examples in Physics Games: Challenges in Integrating Proven Cognitive Scaffolds into Game Mechanics

Written By

Deanne Adams, Douglas B. Clark and Satyugjit Virk

Submitted: 28 October 2017 Reviewed: 02 November 2017 Published: 20 December 2017

DOI: 10.5772/intechopen.72152

From the Edited Volume

Simulation and Gaming

Edited by Dragan Cvetković


Abstract

The current study is an exploratory study into the potential of integrating research on worked examples and physics games. Students were assigned to either a base version of a physics game, called the Fuzzy Chronicles, or assigned to a version of the Fuzzy Chronicles augmented with worked examples. Students in both conditions demonstrated significant gains on the pre-post-test, but students in the base game version demonstrated significantly greater gains than the students in the worked example version. The results from the current study reinforce results from other studies by our research group demonstrating how important it is that scaffolds based on multimedia research (a) do not over scaffold the student or promote passive, automatic behaviors, (b) do not excessively detract from the student’s gameplay time, and (c) do not disrupt game cognition and flow.

Keywords

  • game-based learning
  • worked examples
  • science education
  • game design
  • scaffolds
  • physics

1. Introduction

The current study is an exploratory study into the potential of integrating research on worked examples into physics games to support deeper learning. The theory behind worked examples is that working memory, which is limited in capacity, is heavily utilized when solving problems, such as setting subgoals that require highly focused cognition [1]. Problem solving has been shown to consume cognitive resources that could be better allocated for learning, integration and consolidation. Worked examples free cognitive resources in working memory for learning, specifically, for the assimilation of new knowledge by generative processing [2].

Many research studies have shown the advantages of learning from correct worked examples [1, 3, 4, 5, 6, 7, 8, 9, 10]. It is important however to be aware of the “expertise reversal effect” [11], which has shown through numerous studies that an instructional technique that benefits low prior knowledge learners can lose its benefit for, and in some cases be detrimental to, high prior knowledge learners.

Based on the research discussed above, worked examples can enhance learning in multimedia settings. The purpose of the current study was to examine the efficacy of an approach to integrating worked examples into physics learning games. The current study explores this question by comparing two conditions, one that integrates worked examples into gameplay and one that does not, in order to explore four hypotheses:

  1. Students who experienced worked examples in the Fuzzy Chronicles game to support the comprehension and interpretation of gameplay were expected to show increased pretest-posttest gains compared to students who did not experience worked examples.

  2. Students who experienced worked examples would progress significantly farther in the game than the baseline group because they would have an enhanced understanding of gameplay and Newtonian physics concepts.

  3. Students in the worked examples condition would display patterns in their gameplay behavior indicating deeper conceptual sophistication than the students in the non-worked examples condition.

  4. These effects would be especially pronounced in low prior knowledge learners.


2. Methods

2.1. Participants

Participants consisted of 53 seventh grade students (F = 24, M = 29) from a Nashville area middle school. Twelve students were removed from the sample for missing either the pretest (4), a day of gameplay (4), or the posttest (4). One student was also removed from the worked examples condition for having a difference score more than three standard deviations below the mean. This left a total of 40 students (F = 17, M = 23). A chi-squared analysis revealed equal numbers of males and females distributed between the two groups, X2 (1, N = 40) = 0.102, p = 0.75.

2.2. Materials

The game used for this study was an updated version of the conceptually-integrated educational physics game known as The Fuzzy Chronicles [12, 13, 14]. During the game, the player takes on the role of the space pilot Surge who is trying to help a group of aliens known as Fuzzies. Detailed descriptions of Fuzzy Chronicles are provided in Refs. [15, 16, 17].

Similar to the versions of Fuzzy Chronicles used in other studies, each game level in the version used for the current study takes place on a grid, with the goal of navigating from a launching point to a goal portal/door while avoiding obstacles. Unlike the previous version, the version of the game used for this experiment included only one level of difficulty for each mission, to encourage students to progress through the educational content of the game. The levels were broken up into five concepts based on Newton’s laws of motion and the game mechanics designed to teach those concepts. The five concepts were: combination of forces (using rocket boosts), changes in mass (picking up Fuzzies), equal and opposite reactions in 1D (launching Fuzzies while not moving), equal and opposite reactions in 2D (launching Fuzzies while moving), and the law of inertia, where an object in motion stays in motion unless acted upon by an outside force (dropping Fuzzies).

There were 7 levels for each concept, making a total of 35 levels. Each set of seven levels included one boss level that required students to use all the skills they had developed through the previous six levels to show mastery of the concept. For example, in the boss level for the concept of combining forces, students had to increase and decrease their speed as well as complete two 90° turns. For the experimental manipulation, the six non-boss levels in each set were grouped in pairs. The first level in each pair for the worked examples group was designed to be split into two isomorphic segments separated by a laser. During the first segment there is a preplaced trajectory and preplaced forces that take the ship from the starting launch point to the button that switches off the laser. The first segment thus demonstrates how to complete the target maneuver successfully and serves as an example that students can use to help them navigate from the laser button to the goal.

For the base game group, students are given only the second half of the level and are required to figure out how to complete the maneuver on their own using the basic tips given in the level introductory text. An example of the same level, requiring students to move on a diagonal path, is shown for the base game and worked examples groups in Figure 1. For the second level in each pair, both groups are given the same level, which asks them to complete a similar maneuver. The level progression for the two groups can be found in Table 1.

Figure 1.

Example screen shots of a base game level (left) and the corresponding worked example level with the worked example in the first half of the level (right).

Concept | Base game | Worked example (WE) | Level
Adding forces horizontally | PS | WE | 1
 | PS | PS | 2
Adding forces to create a diagonal | PS | WE | 3
 | PS | PS | 4
90° turns | PS | WE | 5
 | PS | PS | 6
Force boss mission | | | 7
Adding mass | PS | WE | 8
 | PS | PS | 9
Moving and stopping with mass | PS | WE | 10
 | PS | PS | 11
Moving diagonally with mass | PS | WE | 12
 | PS | PS | 13
Mass boss mission | | | 14
Basic launching | PS | WE | 15
 | PS | PS | 16
Launching a set speed | PS | WE | 17
 | PS | PS | 18
Double launching | PS | WE | 19
 | PS | PS | 20
Launching while stopped boss | | | 21
45° deflection | PS | WE | 22
 | PS | PS | 23
Deflection to straighten path | PS | WE | 24
 | PS | PS | 25
Double deflection | PS | WE | 26
 | PS | PS | 27
Launching on the move boss mission | | | 28
Basic dropping | PS | WE | 29
 | PS | PS | 30
Mass changes with dropping | PS | WE | 31
 | PS | PS | 32
Dropping and launching | PS | WE | 33
 | PS | PS | 34
Dropping boss mission | | | 35

(PS = problem-solving level; WE = worked example level.)

Table 1.

Level progression for base game and worked example groups.

The pretest and posttest consisted of 18 multiple choice questions. Eight of the items dealt with changes in acceleration in 1D, with subsets of those items dealing with the relationship between force and acceleration when mass is unchanged (2 items), balanced forces (2 items), and combining forces (4 items). Four of the items dealt with adding forces together to create 2D movement. Three of the test items focused on the F = MA relationship, highlighting changes in mass. Finally, the last three questions dealt with the 3rd law of motion (for every action there is an equal and opposite reaction) and required the students to imagine how throwing an object would affect the speed and direction of another object.

In addition to addressing whether students can learn from the game, this study also examines students’ perceptions of the game in terms of enjoyment, difficulty, cognitive effort, and perceived learning. To address this goal, a seven-item game evaluation survey with a five-point Likert scale ranging from “1 Strongly Agree” to “5 Strongly Disagree” was developed.

A second survey was administered to determine the student’s level of video game experience. Students were asked how many hours a week they typically played video games, which video game consoles and portable video game devices their families owned and they played regularly, whether they played games on a desktop or laptop computer, and whether they regularly played games on a smartphone or tablet.

Finally, to assess the students’ self-efficacy in terms of playing video and computer games, the video gaming subset of items from the Self-Efficacy in Technology and Sciences (SETS) instrument was used [18]. The instrument was included in order to determine whether self-efficacy while playing games could affect the student’s play style or if the different versions of the game were more helpful to students with lower video game self-efficacy.

2.3. Procedure

Ten days prior to playing the game, students were given the pretest to determine prior knowledge levels related to the learning goals of the game. Students were given as much time as they needed to complete the test. All students were given two full class periods (90 min) to play the game. On the first day of gameplay students were introduced to the WISE system and given instructions on how to set up their personal game accounts. Students were given hard copy instructions related to their version of the game. The first activity students had to complete in the game was the 15-item video game self-efficacy survey [18]. Upon completing the survey, students were taken to the in-game tutorial, which instructed students on how to place trajectory points, place forces on the timeline, set the direction and magnitude of each force, and combine forces.

Students then played the game at their own pace. Students were told that they could assist other students at their lab table, but were instructed not to touch the other student’s computer. The researchers gave minimal help in terms of game instructions and were instructed to refrain from giving any physics related assistance. The posttest was administered on Friday at the end of the week. Due to the alternating block scheduling, half of the students completed the test 1 day after finishing gameplay while the other half completed the test 2 days after finishing gameplay. After the posttest, students were asked to complete the game evaluation survey and the gaming experience survey.


3. Results

3.1. Learning gains results

Due to a typo on the pretest materials, one question relating to mass was removed from the analysis for both the pre and posttest. There were no significant differences between the two groups on the pretest, t (38) = 0.95, p = 0.35. A repeated measures ANOVA examining testing session (pre vs. post) × group (base game vs. worked example) revealed a significant main effect of testing session, F(1, 36) = 73.30, p < 0.001, d = 1.27, with students performing significantly better on the posttest. Students had a mean gain of 4.28 points (SD = 3.21). There was also a significant interaction between testing session and group, F(1, 36) = 4.38, p = 0.04, with the base game group showing significantly higher gains between the two testing sessions compared to the worked example group. Table 2 contains the means and standard deviations for both groups on the pretest and the posttest.

Condition | Pretest M (SD) | Posttest M (SD)
Base game | 7.75 (2.61) | 12.95 (3.72)
Worked examples | 8.60 (3.03) | 11.95 (3.94)
Total | 8.18 (2.83) | 12.45 (3.82)

Table 2.

Means and Standard Deviations for Worked Examples and Base Game groups on Pretest and Posttest.
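The mixed design above can be sketched in code. With only two groups and two sessions, the session × group interaction F in a mixed ANOVA equals the squared t from an independent-samples t-test on gain scores, so a minimal sketch needs only scipy. The scores below are randomly generated placeholders loosely matching the reported group means, not the study data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical pre/post scores for two groups of 20 (NOT the study data).
base_pre = rng.normal(7.75, 2.61, 20)
base_post = base_pre + rng.normal(5.2, 3.0, 20)  # larger hypothetical gains
we_pre = rng.normal(8.60, 3.03, 20)
we_post = we_pre + rng.normal(2.3, 3.0, 20)      # smaller hypothetical gains

# Main effect of testing session: paired t-test, all students pooled.
t_session, p_session = stats.ttest_rel(
    np.concatenate([base_pre, we_pre]),
    np.concatenate([base_post, we_post]),
)

# Session x group interaction: for a 2 (group) x 2 (session) mixed design,
# the interaction F equals t**2 from an independent-samples t-test on gains.
t_inter, p_inter = stats.ttest_ind(base_post - base_pre, we_post - we_pre)
print(f"session: p = {p_session:.4f}; interaction: p = {p_inter:.4f}")
```

A full mixed ANOVA (e.g., via a dedicated statistics package) would report the same interaction; the gain-score t-test is simply the two-group special case.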

In addition to looking at overall learning gains, an additional repeated measures ANOVA was conducted to examine which concepts showed gains between the pre and posttest as well as significant differences in gains between the two conditions. For questions dealing with 1D changes in acceleration, there was a significant main effect of testing session for 1D questions in general, F (1, 38) = 26.65, p < 0.001. Within those questions, there was a significant main effect for questions dealing with balanced forces, F (1, 38) = 19.37, p < 0.001, and questions dealing with combining forces, F (1, 38) = 19.08, p < 0.001. There was no significant difference for questions dealing with the relationship between force and acceleration, F (1, 38) = 0.06, p = 0.80. Students also had significant increases between the pre and posttest for questions dealing with adding forces for 2D movement, F (1, 38) = 38.75, p < 0.001, questions dealing with mass, F (1, 38) = 11.56, p = 0.002, and 3rd law throwing questions, F (1, 39) = 29.95, p < 0.001. Looking at pretest/posttest differences between the two groups, there was a significant interaction between testing session and condition for questions dealing with 2D movement, F (1, 38) = 4.64, p = 0.04, with the base game group showing larger test gains compared to the worked examples group.

Although there was no significant benefit for providing worked examples in terms of learning gains, one possibility is that worked examples could have been more effective for lower prior knowledge players. Students were ranked as high and low prior knowledge learners using a median split (18 low, 22 high). A chi-squared analysis revealed that there was no significant difference in distribution of high and low ranked prior knowledge individuals between the two conditions, X2(1, N = 40) = 0.404, p = 0.53. A MANOVA looking at gain scores between the pre and posttest showed no significant effect of prior knowledge ranking, F (1, 36) = 0.01, p = 0.93, as well as no significant interaction between condition and prior knowledge rank, F (1, 37) = 0.08, p = 0.78. This suggests that regardless of prior physics knowledge, as measured by our pretest, participants did significantly better when they were in the base game group.
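The median split and the accompanying chi-squared balance check can be sketched as follows. The pretest scores and condition labels below are illustrative placeholders, not the study data; they are chosen to yield a perfectly balanced 2 × 2 table.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical pretest scores and condition labels (NOT the study data).
pretest = np.array([5, 9, 7, 11, 6, 10, 8, 12, 4, 9, 7, 13, 6, 10, 8, 11])
condition = np.array(["base", "base", "we", "base", "we", "we", "base", "base",
                      "we", "we", "base", "we", "base", "we", "we", "base"])

# Median split into low/high prior knowledge.
rank = np.where(pretest > np.median(pretest), "high", "low")

# 2x2 contingency table: condition x prior-knowledge rank.
table = [
    [int(np.sum((condition == c) & (rank == r))) for r in ("low", "high")]
    for c in ("base", "we")
]

# Chi-squared test of whether the ranks are distributed evenly
# across the two conditions.
chi2, p, dof, expected = chi2_contingency(table)
print(table, f"p = {p:.2f}")
```

With these placeholder values each condition contains four low and four high prior knowledge students, so the test finds no imbalance, mirroring the non-significant result reported above.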

3.2. Game play analysis

Highest game level completed was used to determine how much of the game and learning content was experienced by the students. A significant positive relationship was found between pretest score and highest level completed, r(40) = 0.48, p = 0.002. There was also a significant positive correlation between the highest level completed during the game and learning gains, r(40) = 0.33, p = 0.04 suggesting that progression further in the game also helped students learn more. The strongest correlation was between posttest score and highest level completed, r(40) = 0.63, p < 0.001. Together these results indicate that prior knowledge helped students to progress through the game and that progressing further also helped students improve their scores between the two testing sessions.

To examine whether differences in the number of levels completed affected learning differences between the two groups, an ANOVA was conducted. There was a significant difference between the two conditions in the average highest level completed, F (1, 36) = 5.02, p = 0.03, with participants in the base game group tending to complete more levels (M = 19.10, SD = 7.21) than participants in the worked examples group (M = 15.20, SD = 8.61). Similar to the work of Sweller, this project was also interested in how the two conditions affected play style. To explore this question, the average number of attempts was computed for the levels that did not include a worked example and were identical across the two conditions (the second level in each pair). This was done for just the first three non-worked example levels, since all participants had completed those levels. There were no significant differences between the two conditions in average overall attempts, t (31.24) = −0.55, p = 0.59, although variance was unequal between the two conditions, with the worked examples group showing a larger variance in attempts (M = 18.92, SD = 17.38) compared to the base game group (M = 16.43, SD = 10.50).
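The fractional degrees of freedom above (t (31.24)) indicate a Welch-type test, which does not assume equal group variances; that matters here because attempt counts varied far more in one group. A minimal sketch with hypothetical data (loosely matching the reported means and SDs, not the study data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical per-student data (NOT the study data): highest level
# completed and attempt counts on the shared (non-worked-example) levels.
levels_base = rng.normal(19.1, 7.2, 20)
levels_we = rng.normal(15.2, 8.6, 20)
attempts_base = rng.normal(16.4, 10.5, 20)
attempts_we = rng.normal(18.9, 17.4, 20)

# Welch's t-test: equal_var=False drops the equal-variance assumption
# and yields the fractional degrees of freedom seen in the text.
t_att, p_att = stats.ttest_ind(attempts_base, attempts_we, equal_var=False)

# Pearson correlation between progression and a hypothetical posttest
# score constructed to partly track progression.
levels = np.concatenate([levels_base, levels_we])
posttest = 0.3 * levels + rng.normal(0, 3, 40)
r, p_r = stats.pearsonr(levels, posttest)
print(f"Welch t = {t_att:.2f}; r = {r:.2f}")
```

The same `pearsonr` call is the form of analysis behind the level-completion correlations reported in Section 3.2.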

3.3. Game satisfaction survey

The game satisfaction survey revealed no significant differences between the two groups in terms of whether students enjoyed playing the game (p = 0.38), found the game difficult to play (p = 0.87), found the interface confusing, (p = 0.73), would play again (p = 0.27), or reported working hard to complete game missions (p = 0.57). In terms of perceived physics learning, there were no significant differences between the two groups in whether they thought they learned a lot about physics from the game (p = 0.21) or thought that the game helped them understand physics lessons they had learned in class (p = 0.19).

The distribution of responses for all of the participants can be found in Table 3. In general, 65% of the students reported that they strongly agreed or agreed that they enjoyed playing the game, while only one student said they disagreed. Over half of the students (60%) agreed or strongly agreed that they would like to play the game, or games like it, again in the future. In terms of mental effort, 80.4% reported that they worked hard to understand how to play the game and to complete the missions. Regarding difficulty, 32.5% of the students agreed or strongly agreed that they found the game difficult to play, 30% neither agreed nor disagreed, and 37.5% disagreed that the game was difficult to play. Only 12.5% of the students found interacting with the game to be difficult, while the largest share were neutral (42.5%). For physics learning, the majority of students agreed to some degree that they learned about physics from playing the game (72.5%) and also thought that the game helped them understand physics lessons they had learned in class (72.5%).

Question | Strongly agree (%) | Agree (%) | Neutral (%) | Disagree (%) | Strongly disagree (%)
Liked playing game | 35.00 | 30.00 | 32.50 | 2.50 | 0
Play again | 37.50 | 22.50 | 32.50 | 2.50 | 5.00
Worked hard | 35.00 | 47.50 | 12.50 | 5.00 | 0
Game difficult to play | 5.00 | 27.50 | 30.00 | 35.00 | 2.50
Interacting with game was confusing | 2.50 | 10.00 | 42.50 | 35.00 | 10.00
Learned about physics | 42.50 | 30.00 | 25.00 | 2.50 | 0
Helped understand class lessons | 40.00 | 32.50 | 20.00 | 7.50 | 0

Table 3.

Student survey responses.

3.4. Video game self-efficacy

The analyses focus on the video game self-efficacy subscale due to the lack of significant relationship between any of the learning measures and the computer gaming self-efficacy scale. There was no significant difference in reported video game self-efficacy scores between the two groups, t(38) = −1.61, p = 0.12. In addition, there was no significant interaction between game version and self-efficacy ranking (high vs. low, created using a median split) on learning gains, F (1, 36) = 0.01, p = 0.91. In general, there was a significant positive relationship between video game self-efficacy and performance on the posttest, r(40) = 0.33, p = 0.04, although there was no significant relationship with either the pretest or learning gains. There was also a significant positive relationship between the video game self-efficacy score and the highest game level completed, r(40) = 0.38, p = 0.02, as well as a significant negative relationship with the number of attempts made on the non-worked example levels, r(40) = −0.37, p = 0.02. This suggests that students with higher self-efficacy were less likely to just use trial and error to solve the levels and completed more levels as a result. This is further supported by significant positive relationships of perceived game difficulty, r(40) = 0.44, p = 0.005, and finding the game confusing, r(40) = 0.51, p = 0.001, with video game self-efficacy. Students with lower self-efficacy were more likely to agree that they found the game confusing to interact with and difficult to play. However, as students’ self-efficacy increased, they were more likely to say they liked the game, r(40) = −0.52, p < 0.001, and would play the game again, r(40) = −0.34, p = 0.03. Most importantly, as students’ self-efficacy increased, students were more likely to report that they learned physics from the game, r(40) = −0.37, p = 0.02, and that the game helped them understand lessons from class, r(40) = −0.50, p = 0.001.


4. Discussion

The highest level completed by each student correlated with how much game and learning content the students experienced while interacting with The Fuzzy Chronicles. The measured pretest score served as a positive indicator for how far students were projected to advance in the game, suggesting that students’ prior knowledge was a significant component to the number of levels students were able to complete. Pretest scores did not differ between students in the worked example condition and the baseline condition, which suggests that the conditions were balanced in terms of prior knowledge. Students who completed more levels also showed higher learning gains.

Overall, the baseline condition proved to be more beneficial for both high and low prior knowledge learners compared to the worked example condition. This finding contradicted our expectation that low prior knowledge participants would perform better after playing through the worked example condition. Highest level completed also correlated positively with students’ posttest scores, indicating that game play was correlated with increasing or facilitating understanding of game play content. The average highest game level completed differed significantly between the two conditions, with baseline students completing more levels than their counterparts in the worked example condition. This result was unexpected. If anything, we would have expected the same or less time to complete each level in the worked example condition. Instead, the inclusion of the worked examples impeded level progression and had a negative effect on student performance overall. No significant difference was found between worked example and baseline conditions regarding the overall number of attempts, suggesting that game play behavior between conditions was equivalent even though the amount of level completion between conditions differed.

Self-efficacy of the students was analyzed to determine how individual judgments of performance ability affected game play. Self-efficacy differences were not seen between the two groups, and self-efficacy did not appear to have a significant influence on pretest scores or learning gains. However, self-efficacy had a significant positive relationship with posttest scores and highest game level completed, indicating that the students’ perception of their ability to understand the game while playing could have influenced their performance and engagement. Self-efficacy and number of attempts made on non-worked example levels were inversely related, suggesting that students with a higher degree of self-efficacy were less likely to approach the levels by trial and error. Students with lower self-efficacy scores were also more likely to perceive the game as confusing and difficult to play, whereas students with higher self-efficacy reported that they enjoyed the game and would play it again. Self-efficacy scores were also positively correlated with students indicating that they learned physics concepts from the game and that the game helped reinforce content from class. Self-efficacy may have influenced how students perceived the value of the game.


5. Conclusions

The findings show that students in both the base game condition and the worked example condition demonstrated significant pre-post gains. In an earlier study, we included a null condition with only a pretest and posttest but no intervention to determine whether a test/retest phenomenon could account for gains without an intervention [17]. That study showed that gains on the test could not be attributed simply to a test/retest effect. We therefore believe the significant pre-post gains in both conditions in the current study demonstrate the overall efficacy of the approach enacted in Fuzzy Chronicles. Newtonian concepts can be very challenging for students and are often resistant to instruction [19, 20]. We are pleased that these findings are in line with the overarching disciplinarily-integrated design of the Fuzzy Chronicles.

The findings from the worked examples condition, however, do not support our hypotheses. While these findings are disappointing, we have encountered similar patterns in our prior research when we have attempted to integrate well-documented principles about scaffolding from psychology and cognitive science into digital games for learning. Our research has shown that when scaffolding functionality comes at the expense of time spent in gameplay, it can detract from game cognition and STEM learning [15, 21]. Adams and Clark’s findings demonstrated that self-explanation prompts slowed students in the prompt condition so that the students completed significantly fewer levels and scored significantly lower on a learning assessment [15]. Looking across those studies and the current study, we note that implementing well-documented multimedia principles of learning in STEM games may not enhance learning if the design interferes with students’ flow, cognitive load, or engagement with the game mechanics. In particular, the results suggest that when the worked example approach is overemphasized in a STEM game, it can disrupt or over scaffold learners, resulting in reduced learning gains and less productive gaming behavior. More specifically, across the current study and the other two studies to which we referred, we have repeatedly found that scaffolds based on multimedia research must not (a) over scaffold the student or promote passive, automatic behaviors, (b) excessively detract from the student’s gameplay time, or (c) disrupt game cognition and flow, especially the pace of flow.

This does not mean that these well-documented learning and scaffolding principles are incompatible with the design of digital games for learning. It simply means that designs integrating game mechanics with scaffolding require careful iteration and refinement. In the case of the self-explanation functionality, for example, building on the findings of the Adams and Clark study, we redesigned the self-explanation functionality to adapt to students’ level of sophistication in working with abstract prompts [15]. We also adjusted the timing and frequency of the prompts so that the prompts only appeared after the player had successfully completed a level. Timed in this way, the prompts were less intrusive and disruptive to players’ gameplay and more likely to be appropriate to the player’s current progress and solution. Our research on this refined approach to self-explanation demonstrated significant pre-post learning gains as compared to a version without the self-explanation functionality [22]. Similarly, we believe that these findings imply the need to refine our approach to worked examples within gameplay rather than implying that worked examples are inappropriate for application in this setting. Essentially, we consider the findings a reminder of the complexity of integrating scaffolds, an idea which has been developed in other educational contexts and applied to the context of digital games for learning.


Acknowledgments

The research reported here was supported by the Institute of Education Sciences, U.S. Department of Education, and the National Science Foundation through grants R305A110782 and 1119290 to Vanderbilt University. The opinions expressed are those of the authors and do not represent views of the Institute, the U.S. Department of Education, or the National Science Foundation.

References

  1. Catrambone R. The subgoal learning model: Creating better examples so that students can solve novel problems. Journal of Experimental Psychology: General. 1998;127(4):355-376
  2. Sweller J, Ayres P, Kalyuga S. Cognitive Load Theory. New York: Springer; 2011
  3. Kalyuga S, Chandler P, Tuovinen J, Sweller J. When problem solving is superior to studying worked examples. Journal of Educational Psychology. 2001;93:579-588
  4. McLaren BM, Lim S, Koedinger KR. When and how often should worked examples be given to students? New results and a summary of the current state of research. In: Proceedings of the 30th Annual Conference of the Cognitive Science Society. Austin: Cognitive Science Society; 2008. pp. 2176-2181
  5. Paas F, van Merriënboer J. Variability of worked examples and transfer of geometrical problem-solving skills: A cognitive-load approach. Journal of Educational Psychology. 1994;86(1):122-133
  6. Renkl A. The worked examples principle in multimedia learning. In: Mayer RE, editor. The Cambridge Handbook of Multimedia Learning. 2nd ed. New York: Cambridge University Press; 2014. pp. 391-412
  7. Renkl A, Atkinson RK. Learning from worked-out examples and problem solving. In: Plass JL, Moreno R, Brünken R, editors. Cognitive Load Theory. Cambridge: Cambridge University Press; 2010
  8. Schwonke R, Renkl A, Krieg C, Wittwer J, Aleven V, Salden R. The worked-example effect: Not an artifact of lousy control conditions. Computers in Human Behavior. 2009;25(2):258-266
  9. Sweller J, Cooper GA. The use of worked examples as a substitute for problem solving in learning algebra. Cognition and Instruction. 1985;2:59-89
  10. Zhu X, Simon HA. Learning mathematics from examples and by doing. Cognition and Instruction. 1987;4(3):137-166
  11. Kalyuga S. Expertise reversal effect and its implications for learner-tailored instruction. Educational Psychology Review. 2007;19(4):509-539
  12. Clark DB. Designing Games to Help Players Articulate Productive Mental Models. Keynote commissioned for the Cyberlearning Research Summit 2012 hosted by SRI International, the National Geographic Society, and the Lawrence Hall of Science with funding from the National Science Foundation and the Bill and Melinda Gates Foundation. Washington, DC; 2012. http://www.youtu.be/xlMfk5rP9yI
  13. Clark DB, Sengupta P, Brady C, Martinez-Garza M, Killingsworth S. Disciplinary integration in digital games for science learning. International STEM Education Journal. 2015;2(2):1-21. DOI: 10.1186/s40594-014-0014-4. http://www.stemeducationjournal.com/content/pdf/s40594-014-0014-4.pdf
  14. Clark DB, Virk SS, Sengupta P, Brady C, Martinez-Garza M, Krinks K, Killingsworth S, Kinnebrew J, Biswas G, Barnes J, Minstrell J, Nelson B, Slack K, D'angelo CM. SURGE’s evolution deeper into formal representations: The siren’s call of popular game-play mechanics. International Journal of Designs for Learning. 2016;7(1):107-146. https://scholarworks.iu.edu/journals/index.php/ijdl/article/view/19359
  15. Adams DM, Clark DB. Integrating self-explanation functionality into a complex game environment: Keeping gaming in motion. Computers in Education. 2014;73:149-159
  16. Van Eaton G, Clark DB, Smith BE. Patterns of physics reasoning in face-to-face and online forum collaboration around a digital game. International Journal of Education in Mathematics, Science and Technology. 2015;3(1):1-13. http://ijemst.com/issues/3.1.1.Van_Eaton_Clark_Smith.pdf
  17. Martinez-Garza M, Clark DB. Data-mining epistemic stances from raw game-play data. International Journal of Gaming and Computer-Mediated Simulations; in press
  18. Ketelhut DJ. Assessing gaming, computer and scientific inquiry self-efficacy in a virtual environment. In: Annetta L, Bronack SC, editors. Serious Educational Game Assessment: Practical Methods and Models for Educational Games, Simulations, and Virtual Worlds. Sense Publishers; 2011. pp. 1-18. ISBN: 978-94-6091-329-7. DOI: 10.1007/978-94-6091-329-7
  19. Hestenes D, Wells M, Swackhamer G. Force concept inventory. The Physics Teacher. 1992;30:141-158
  20. Hestenes D, Halloun I. Interpreting the FCI. The Physics Teacher. 1995;33:502-506
  21. Virk SS, Clark DB. Signaling in disciplinarily-integrated games: Challenges in integrating proven cognitive scaffolds within game mechanics to promote representational competence. In: Rud AG, Adesope O, editors. Contemporary Technologies in Education: Maximizing Student Engagement, Motivation, and Learning. Palgrave Macmillan; in press
  22. Clark DB, Virk SS, Barnes J, Adams DM. Self-explanation and digital games: Adaptively increasing abstraction. Computers & Education. 2016;103:28-43. DOI: 10.1016/j.compedu.2016.09.010
