COGNITION AND INSTRUCTION, 24(4), 565–591 Copyright © 2006, Lawrence Erlbaum Associates, Inc.

The Deep-Level-Reasoning-Question Effect: The Role of Dialogue and Deep-Level-Reasoning Questions During Vicarious Learning

Scotty D. Craig, Jeremiah Sullins, Amy Witherspoon, and Barry Gholson
The University of Memphis

We investigated the impact of dialogue and deep-level-reasoning questions on vicarious learning in 2 studies with undergraduates. In Experiment 1, participants learned material by interacting with AutoTutor or by viewing 1 of 4 vicarious learning conditions: a noninteractive recorded version of the AutoTutor dialogues, a dialogue with a deep-level-reasoning question preceding each sentence, a dialogue with a deep-level-reasoning question preceding half of the sentences, or a monologue. Learners in the condition in which a deep-level-reasoning question preceded each sentence significantly outperformed those in the other 4 conditions. Experiment 2 included the same interactive and noninteractive recorded conditions, along with 2 vicarious learning conditions involving deep-level-reasoning questions. Both deep-level-reasoning-question conditions significantly outperformed the other conditions. These findings provide evidence that deep-level-reasoning questions improve vicarious learning.

Recent advances in computer-based courses (Anderson, Corbett, Koedinger, & Pelletier, 1995; Derry & Potts, 1998; Holland, Kaplan, & Sams, 1995; Lesgold, Lajoie, Bunzo, & Eggan, 1992) and distance learning (Paulsen, 1995; Scardamalia et al., 1992) have created situations in which learners frequently find themselves trying to understand course content in settings in which they are observers (Brennan & Clark, 1996; Cox, McKendree, Tobin, Lee, & Mayes, 1999; Fox Tree, 1999; McKendree, Stenning, Mayes, Lee, & Cox, 1998; Schober & Clark, 1989) rather than addressees or active participants in the learning process. The new educational technologies present numerous challenges for researchers. That is, there is a need for further understanding of the conditions that support knowledge acquisition in computerized educational settings in which learners are relatively isolated (e.g., Lee, Dineen, & McKendree, 1998; McKendree et al., 1998). Specifically, the question addressed in the current research is this: How can computer-based instruction be designed to support knowledge acquisition processes (Mayer, 1997) when learners cannot physically interact with, or control the content of, that which they are attempting to master?

Correspondence should be addressed to Barry Gholson, Department of Psychology, The University of Memphis, Memphis, TN 38152. E-mail: [email protected]

OVERVIEW OF THIS ARTICLE

This article is concerned with ways of supporting knowledge acquisition in vicarious learning environments. We begin with a section on vicarious learning environments, which includes subsections on the role of question asking in those environments and on the self-explanation effect. The second section contrasts vicarious learning with interactive learning, with subsections on human tutoring and an intelligent tutoring system called AutoTutor. The third section explores theoretical foundations for the present research, including discussions of vicarious learning from standard dialogue and from dialogue that includes deep-level-reasoning questions. Predictions derived from these frameworks are evaluated in two experiments reported in the fourth section. We conclude with a general discussion of the implications of the two experiments, a description of the limitations of the current research, and an exploration of issues that remain to be addressed empirically.

VICARIOUS LEARNING ENVIRONMENTS

Vicarious learning environments are those in which learners see or hear content for which they are not the addressees and have no way of physically interacting with the source of the content they are attempting to master. The study of vicarious learning in the psychology of human learning dates back to Bandura's (1962) early work on modeling aggression with children, and it continued under such labels as observational learning and social learning (e.g., Bandura, 1977; Rosenthal & Zimmerman, 1978). More recent work, however, has investigated various manipulations designed to support learning processes during vicarious knowledge acquisition (Craig, Gholson, & Driscoll, 2002; Craig, Gholson, Ventura, Graesser, & Tutoring Research Group, 2000; Driscoll et al., 2003; Lee et al., 1998; McKendree, Good, & Lee, 2001; McNamara, Levinstein, & Boonthum, 2004).


Question Asking in Vicarious Learning Environments

It has long been known that question generation is one of the processing components that supports comprehension (Collins, Brown, & Larkin, 1980; Graesser, Singer, & Trabasso, 1994; Kintsch, 1998), problem solving, and reasoning (Graesser, Baggett, & Williams, 1996; Sternberg, 1987). Craig et al. (2000) showed that it is relatively easy to implement strategies that promote the generation of deep-level-reasoning questions (Bloom, 1956; Graesser & Person, 1994) and knowledge acquisition among college students (see also Driscoll et al., 2003). Asking good questions leads to improved comprehension, learning, and memory of the materials among schoolchildren as well (e.g., Davey & McBride, 1986; Gavelek & Raphael, 1985; King, 1989; King, Staffieri, & Adelgais, 1998; Palincsar & Brown, 1984).

Craig et al. used vicarious learning procedures to efficiently induce question asking in a relatively brief period (about 30 min). Two male computer-controlled animated agents, a virtual tutor and a virtual tutee located on opposite sides of a monitor, discussed a series of eight scripted computer literacy topics. A series of pictures, one relevant to each topic, was also on the monitor, located between the virtual tutor and the virtual tutee. During acquisition, learners overheard the virtual tutee carry on a dialogue with the virtual tutor, or they overheard a monologue-like discourse. Each topic began with a brief information delivery by the virtual tutor. In the monologue-like condition, the virtual tutee then asked one broad question meant to provide a context for what followed, and the virtual tutor answered with a monologue-like discourse that presented all the information on that topic. In the dialogue condition, each brief information delivery was instead followed by a lively series of conversational exchanges.
The virtual tutee asked a series of deep-level-reasoning questions, a total of 66 across the eight topics, and the virtual tutor immediately answered each. The deep-level-reasoning question frames were drawn from six categories in a question taxonomy presented by Graesser and Person (1994): comparison, interpretation, causal antecedent, causal consequent, instrumental/procedural, and enablement. These question categories are closely tied to Bloom's (1956) taxonomy of cognitive activities. The exact words, phrases, and sentences spoken by the virtual tutor in response to the virtual tutee's questions were identical in the dialogue and monologue-like conditions on each topic. A sample information delivery and dialogue between the virtual tutor and virtual tutee discussing the operating system is included in Appendix A. An example of a question in the monologue-like condition for the same topic is "Why is it important to load the operating system, and what does that have to do with how the programs perform?"

Immediately following acquisition and before a transfer task, free-recall questions on the discourse content of two of the topics were administered. In the transfer task, the learners were presented with a series of eight new computer literacy topics and were given the opportunity to ask questions on each. They were told that, at the outset of each topic, the tutor would present a brief information delivery and that they could direct queries to any information that would help them understand the topic. Only the computer-controlled virtual tutor and a picture relevant to the particular topic were on the monitor during transfer. After the first brief information delivery, the learners continued their queries, with the experimenter immediately answering each, until they said that they were finished with the topic. This activity was followed by a brief information delivery on the next topic, until all eight topics had been presented. Free-recall questions on the discourse content of two of the topics covered in transfer were then administered.

The two free-recall questions administered following acquisition yielded an unexpected finding: Learners in the dialogue condition outperformed those in the monologue-like condition (dialogue, M = 23.8 propositions; monologue-like, M = 19.6 propositions). The difference was only marginally significant, but the effect size (Cohen's d) was 0.44. In the transfer task, learners in the dialogue condition took significantly more conversational turns than those in the monologue-like condition (dialogue, M = 29.7; monologue-like, M = 19.0) and generated significantly more queries (dialogue, M = 36.6; monologue-like, M = 26.3). Another finding was that those in the dialogue condition generated a significantly greater proportion of questions that involved deep-level reasoning (Bloom, 1956; Graesser & Person, 1994) than did those in the monologue-like condition, about twice as many in terms of absolute number. Conversely, those in the monologue-like condition generated a significantly greater proportion of shallow-level-reasoning questions than did those in the dialogue condition (Craig et al., 2000).
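To make the effect-size statistic concrete: Cohen's d is the difference between the condition means divided by a pooled standard deviation. A minimal sketch of the computation, using the reported means but purely hypothetical standard deviations and group sizes (the study does not report them here), might look like this:

```python
def cohens_d(m1, m2, sd1, sd2, n1, n2):
    """Standardized mean difference using a pooled standard deviation."""
    pooled_var = ((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2) / (n1 + n2 - 2)
    return (m1 - m2) / pooled_var ** 0.5

# Means from the study; the SDs (9.5) and group sizes (20) are invented
# solely to illustrate how a d of about 0.44 could arise.
d = cohens_d(23.8, 19.6, 9.5, 9.5, 20, 20)  # ≈ 0.44
```

Note that a d of 0.44 is conventionally a small-to-medium effect, which is why the marginal significance here still motivated follow-up experiments.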
In addition, in answering the free-recall questions following transfer, learners in the dialogue condition wrote significantly more content (M = 23.3 propositions) than did those in the monologue-like condition (M = 16.7 propositions). Clearly, then, the vicarious learning procedures used in acquisition in the dialogue condition were quite effective in inducing learners to take more conversational turns, ask many more deep-level-reasoning questions, and learn (i.e., recall) more content in the transfer task. Whether this difference was due to dialogue per se or to the deep-level-reasoning questions embedded in the dialogue remained to be determined. The unexpected finding that those in the dialogue condition performed marginally better on the recall questions following acquisition provided the framework for the next experiments in the series (Driscoll et al., 2003).

Driscoll et al. (2003, Experiment 1) explored whether, given more precise measures, the marginally significant trend in the free-recall data following acquisition, in favor of those who overheard dialogue in the Craig et al. (2000) study, would prove more robust. In the latter study the between-subjects variability was extreme, ranging from learners who wrote one or two brief sentences on each free-recall question to those who wrote more than a standard page (8 1/2 × 11 in. [22 × 28 cm]) on each. Thus, to obtain more precise measures of the learners' performances, discourse type (dialogue versus monologue-like) was a within-subjects variable in the Driscoll et al. experiments. Two computer-controlled male animated agents, a virtual tutor and a virtual tutee, engaged in scripted dialogue and monologue-like discourse. Each vicarious learner overheard four computer literacy topics discussed in dialogue format and four in monologue-like format. In the monologue-like condition, the virtual tutee asked one broad question on each of the four topics, but in the dialogue condition he asked a total of 33 deep-level-reasoning questions. As in the Craig et al. (2000) study, the virtual tutor spoke the same words, phrases, and sentences in each condition. Following this exposure, each participant was given two free-recall test questions, one probing a topic overheard in dialogue format and the other probing a topic overheard in monologue-like format.

In the Craig et al. (2000) study, the data were scored for the total number of propositions written on each free-recall question, with no further evaluation of the informational content. It was possible, then, that learners in the dialogue condition wrote more but did not actually learn any more of the tutorial content than did those in the monologue-like condition. Thus, to add further precision, the free-recall data obtained by Driscoll et al. (2003) were classified into three categories: relevant, related, and irrelevant. Propositions written by learners on a given topic that matched or paraphrased those spoken by the virtual tutor on that specific topic were classified as relevant. Propositions that matched or paraphrased the content spoken by the virtual tutor on any of the other seven topics were classified as related; that is, they were related to, but not part of, the content of the topic being scored.
Any other propositions written by the learner were classified as irrelevant, for example, metacognitive comments such as "I don't know much about computers" or propositions concerning the tutorial contents that were false. Analyses of the data (Driscoll et al., 2003, Experiment 1) revealed that learners wrote significantly more relevant propositions on free-recall questions probing topics overheard in dialogue format (M = 15.1) than on questions probing topics overheard in monologue-like format (M = 8.8). There were no differences in the number of related or irrelevant propositions, with learners in each condition averaging about 2.5 of each kind.

Driscoll et al. (2003) designed a second experiment to explore several features of dialogue that might have been responsible for the effects obtained in Experiment 1. The researchers deemed it possible that the virtual tutee's questions in the dialogue condition might have improved vicarious learning because they (a) provided concept repetition (Fox Tree, 1999), (b) furnished signaling devices similar to headings in printed text (e.g., Hartley & Trueman, 1985; Loman & Mayer, 1983; Lorch & Lorch, 1995), (c) were questions per se (McKendree et al., 2001), or (d) were deep-level-reasoning questions that activated relevant concepts and provided a coherent context (Duffy, Shinjo, & Myers, 1990; Gernsbacher, 1997; Graesser, Millis, & Zwaan, 1997) with linkages to the content spoken by the virtual tutor on his next conversational turn. Thus, the study included four dialogue conditions as a between-subjects variable, with discourse type (dialogue versus monologue-like) as a within-subjects manipulation in each. In one dialogue condition, the virtual tutee's contributions were all deep-level-reasoning questions (Bloom, 1956; Graesser & Person, 1994); in a second condition, they were all shallow-level-reasoning questions (requiring short answers, usually one word; see Graesser & Person, 1994). In a third condition, the shallow questions from the second condition were transformed into simple assertions spoken by the virtual tutee. In the fourth dialogue condition, the virtual tutee asked only one question per topic (as in the monologue-like condition), and the virtual tutor spoke the simple assertions spoken by the virtual tutee in the third condition. An attempt was made to provide informational equivalence across the four conditions by priming the relevant concepts in each. Examples taken from a topic on the central processing unit (CPU) and computer speed, including the information delivery, deep-level-reasoning questions, shallow-level-reasoning questions, and assertions, are given in Appendix B.

Part of the rationale for the four conditions was that if concept repetition enhances vicarious learning, performance in all four dialogue conditions should exceed that exhibited in the monologue-like conditions. If the tutee's contributions to the dialogue functioned as signals, similar to headings in printed text, then the deep-level-reasoning-questions condition, the shallow-level-questions condition, and the simple-assertions condition should produce differences when dialogue conditions are compared to the monologue-like conditions, but the fourth condition, in which the simple assertions were spoken by the virtual tutor, should not.
If questions per se facilitate vicarious learning from overheard dialogue, then the deep-level-reasoning-questions condition and the shallow-level-reasoning-questions condition should produce differences in favor of the dialogue conditions, whereas the other two conditions should not. Finally, if deep-level-reasoning questions were the key feature of the overheard dialogue in Experiment 1 of Driscoll et al. (2003), then only learners overhearing the deep-level-reasoning questions would be expected to show significantly enhanced vicarious learning when the dialogue conditions are compared to the monologue-like conditions.

Only the analyses performed on the relevant propositions revealed a significant effect, an interaction between discourse type and dialogue condition. Simple-effects procedures performed on the propositions written on dialogue versus monologue-like discourse in each dialogue condition yielded a significant difference only in the deep-level-reasoning-questions condition: The mean number of relevant propositions written by learners in that condition on topics overheard in dialogue format was 17.33, whereas on topics overheard in monologue-like format it was 11.37. Discourse type did not reach significance in any of the remaining dialogue conditions. The differences were small in those three conditions, slightly favoring the dialogue in two conditions and favoring the monologue-like condition in the other.

Our interpretation of these results remains speculative at this time. First, it should be pointed out that the virtual tutee in each condition provided somewhat different perspectives on the content than that presented by the virtual tutor, but these perspectives were not the same across conditions. Second, the shallow questions and assertions, although specifying essentially (but not completely) the same concepts and information in the virtual tutee's contributions, drew on single, isolated concepts. Third, and probably most important, the deep-level-reasoning questions asked about relations among the concepts in the questions and their relationships to concepts presented by the virtual tutor on his next conversational turn. These kinds of linkages are central to cognitive learning processes (e.g., Kintsch, 1998).

The Self-Explanation Effect

Research has indicated that the cognitive activity of self-explanation supports knowledge acquisition. Chi, Bassok, Lewis, Reimann, and Glaser (1989; Chi, de Leeuw, Chiu, & LaVancher, 1994; Chi, Siler, Jeong, Yamauchi, & Hausmann, 2001) asked college students to engage in self-explanation tasks while learning physics concepts and principles. Learners first studied the prose sections of an introductory physics text. They were then given worked-out problems taken from the text and were asked to explain aloud what they understood from reading each statement in the worked-out problems. The basic finding, which the authors termed the self-explanation effect, was that learners who generated more self-explanations (M = 15 per problem) while studying the examples correctly solved more transfer problems (82% correct) than did those who generated fewer self-explanations (M = 3 per problem, 46% correct). The self-explanation effect was replicated in other domains (Ferguson-Hessler & de Jong, 1990; Nathan, Mertz, & Ryan, 1994). Related support came from Webb (1989), who reviewed 19 studies involving students learning mathematics and computer science in small groups. Webb's major findings were that students' reception of elaborate explanations had little effect on achievement but that students' generation of elaborate explanations raised achievement.

The self-explanation effect also proved effective for younger learners. Chi et al. (1994), using a pretest-to-posttest design, presented eighth graders with a 101-sentence passage on how the circulatory system works. The sentences were presented one at a time, each on a separate page. Those in a self-explanation group were prompted to explain what each sentence meant.
The prompts, given by the experimenter, were general, simply a reminder to explain what the sentence meant (Chi et al., 1994); that is, the prompts were content-free, including such questions as "What do you understand from this sentence?" and "What does this sentence mean to you?" (Chi et al., 2001, p. 479). In an unprompted group, learners simply read through the 101-sentence passage a second time.

Chi and colleagues (1994) initially attributed the self-explanation effect to elaborating on the information and drawing inferences. More recently, Chi (2000) expanded on the role of inferences generated during the process of self-explaining. Chi speculated that, while generating inferences, learners may detect a discrepancy between their own mental model or representation and the model conveyed by the text passage. After they detect a discrepancy, the learners reconstruct the activated mental model to bring it into better correspondence with the text model. An inference that links the current topic to prior knowledge, whether from earlier content in the text or from commonsense understanding, involves a reconstruction process: The activated mental model is brought into closer alignment with the new content provided by the text. These processes involve the kinds of constructive activities described by Bartlett (1932) in the context of the activation and reconstruction of schemas. Although we do not know whether deep-level-reasoning questions promote self-explanations, we do believe that they lead to discrepancy detection and reconstruction of the current mental model. For an extended discussion of the relationship between the self-explanation effect and the deep-level-reasoning-question effect, see Gholson and Craig (in press).

VICARIOUS VERSUS INTERACTIVE LEARNING

One question that appears to have received little attention concerns how much knowledge vicarious learners acquire when compared to active participants (Chi et al., 2001) in educational environments that are designed to promote learning processes (e.g., Mayer, 2001; Sweller, 1999; Wittrock, 1990). The research described here on vicarious learning (Craig et al., 2000; Driscoll et al., 2003) included only conditions that involved scripted dialogue or scripted monologue-like discourse; that is, there were no interactive conditions. In interactive conditions, the content is not scripted, and learners, by their contributions to the dialogue, may alter what is presented and how it is presented by the tutor. Craig, Driscoll, and Gholson (2004) contrasted vicarious learning with interactive learning in the context of an intelligent tutoring system called AutoTutor (described later).

Human Tutoring

There is substantial evidence that, when compared to those of classroom instruction, the gains from human tutoring are generally in the range of 0.4 to 2.0 standard deviation units, depending on the expertise of the tutors (Bloom, 1984; Cohen, Kulik, & Kulik, 1982; Corbett, 2001; Graesser & Person, 1994). Cohen et al. (1982) performed a meta-analysis on a large sample of studies that compared human tutoring to various controls. Most of the tutors in the studies were untrained in tutoring skills and had only moderate domain knowledge. They were peer tutors, cross-age tutors, or paraprofessionals, not accomplished professionals. The average learning gain was 0.4 standard deviation units when compared to that of the various control conditions, such as rereading text or engaging in standard classroom activities. The 0.4 gain translates into about a half letter grade. Bloom (1984) reviewed evidence showing that, when compared to classroom controls, accomplished human tutors produce gains of nearly 2.0 standard deviation units, or about two letter grades.

AutoTutor

AutoTutor implements the tutoring strategies of paraprofessionals (Graesser & Person, 1994). These strategies mostly involve attempting to get the learner to fill in missing pieces of information; in addition, AutoTutor attempts to fix any bugs and misconceptions that it detects (Graesser, Person, & Magliano, 1995). AutoTutor's approach to tutoring was inspired by the cognitive approach to learning. Various versions of AutoTutor tutor college students on computer literacy and Newtonian physics. Learning gains obtained in tutoring sessions with AutoTutor generally range from 0.6 to 1.5 standard deviation units when compared to those of various controls. In fact, one version of AutoTutor (Newtonian physics) has been shown to produce pretest-to-posttest learning gains comparable to those produced by accomplished human tutors, physics professors who were experienced tutors (VanLehn et al., in press, Experiments 1 and 5).

AutoTutor, which serves as a conversational partner with the learner, was constructed by the Tutoring Research Group at the University of Memphis. A male agent on the monitor displays facial expressions and some gesturing while conversing with the learner. AutoTutor begins each topic with an information delivery, followed by a question presented to the learner.
For each topic, the Tutoring Research Group constructed an ideal answer to the question. This answer was then decomposed into a set of key concepts (sentences) called expectations. Using latent semantic analysis (e.g., Graesser et al., 2000; Graesser, Person, Harter, & Tutoring Research Group, 2001; Landauer & Dumais, 1997), AutoTutor assesses the learners' progress by comparing their contributions to the content of the expectations. It builds on the learners' contributions to the dialogue by ensuring that each expectation is covered on each topic: Once one expectation is covered, AutoTutor moves on to another until all of them are covered for that particular topic. This stage is followed by a brief summary, before AutoTutor moves on to the next topic.

As part of two large experiments, Craig et al. (2004, Experiments 1 and 2) contrasted the pretest-to-posttest gains of learners who interacted directly with AutoTutor on 12 computer literacy topics with those of yoked controls in several vicarious learning conditions (e.g., agent on monitor versus no agent on monitor). The 12 computer literacy topics included the following discussions: why random access memory (RAM) is important; what the CPU does; what random access disks have in common and how they differ from traditional storage devices; the equipment needed to send photographs over the Internet and how to send them; the important characteristics of different CPUs; what a computer could accomplish without peripherals such as keyboards, monitors, or printers; how information typed on the keyboard eventually reaches the hard disk; how to upgrade RAM to handle larger applications; the advantages of parallel processing; the advantages of reduced instruction set computer technology over complex instruction set computer technology; the relationship between video resolution and bit depth on a multisync monitor; and why a computer infected with a virus would partially boot up.

Learning gains were measured with two 4-foil, 24-item multiple-choice tests. Each test contained two deep-level-reasoning questions evaluating each of the 12 topics. For examples of these deep-level-reasoning questions, see Appendix C. The two multiple-choice tests were counterbalanced as pretest and posttest in each condition. The visual and auditory contributions of AutoTutor and the contributions of each learner were recorded and presented to a yoked learner in each vicarious condition. Although learners in both the interactive conditions and the yoked vicarious conditions showed significant learning gains from pretest to posttest, those in the interactive conditions significantly outperformed those in the vicarious conditions in both Experiments 1 and 2.
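The expectation-coverage cycle described above can be caricatured in a few lines of code. The sketch below stands in for latent semantic analysis with simple bag-of-words cosine similarity (real LSA compares reduced-dimension vectors derived from a training corpus) and uses a hypothetical coverage threshold; it is an illustration of the control loop, not AutoTutor's actual implementation.

```python
from collections import Counter
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two bag-of-words vectors (Counters)."""
    dot = sum(a[w] * b[w] for w in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def covered(expectation, contributions, threshold=0.7):
    """Treat an expectation as covered once the learner's accumulated
    contributions are similar enough to it (the threshold is hypothetical)."""
    exp_vec = Counter(expectation.lower().split())
    con_vec = Counter(" ".join(contributions).lower().split())
    return cosine(exp_vec, con_vec) >= threshold

# The tutor keeps prompting on an expectation until it is covered,
# then moves to the next expectation for the topic.
expectation = "ram temporarily stores data for the cpu"
turns = ["the cpu uses ram", "ram temporarily stores data for the cpu"]
done = covered(expectation, turns)
```

The design point is that coverage is judged semantically against the whole accumulated dialogue, not against any single learner turn, which is what lets AutoTutor "build on" partial contributions.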

THEORETICAL FOUNDATIONS FOR THE PRESENT RESEARCH

Role of Dialogue per se in Supporting Vicarious Learning

Researchers have guided modern work on learning by pointing to the importance of dialogue in the learning process in such areas as human tutoring (Chi et al., 2001; Graesser & Person, 1994), intelligent tutoring systems (Graesser, McNamara, & VanLehn, 2005; VanLehn et al., in press), self-explanation (Chi et al., 1989; Hausmann & Chi, 2002), and classroom settings (King, 1994; Schuh, 2003). In fact, one claim is that dialogue is more important to learning than the medium that transmits it (Graesser et al., 2003). Discourse theory suggests that many dialogue types are useful in promoting knowledge construction, including collaborative problem solving (Palincsar & Brown, 1984), prompted self-explanations (Chi et al., 1994), and question asking and answering (Graesser & Person, 1994; Graesser et al., 1995), and that all are effective in promoting learning during tutoring.

The collaborative theory of communication (Schober & Clark, 1989) argues that a person's role in a dialogue is important to an understanding of the content of the dialogue. The theory states that participants in a dialogue collaborate with each other to establish a grounding criterion, or a mutual belief that each addressee has understood what the speaker meant to a criterion sufficient for current purposes (Clark & Schaefer, 1989). This process improves the dialogue participants' levels of understanding of the material covered. According to this theory, a person overhearing a dialogue but not participating in it as an addressee should be at a disadvantage relative to active participants, because the participants could reach their grounding criterion and move on before the overhearer reached full understanding of the topic.

Fox Tree (1999) extended the collaborative theory to examine overhearers' (vicarious learners') performances when information is presented in the form of monologues and dialogues. She argued that both monologues and dialogues could increase understanding for overhearers. Monologues could help overhearers because they are not tailored to individual learners and are intended for nonparticipating addressees. Dialogues could also facilitate the learning process by bringing in multiple perspectives and multiple sources of information. Fox Tree pointed out that dialogues contain the point of view of each addressee, feedback to the speaker, and linkages to previous information, all three of which support knowledge acquisition processes. Also, overhearing a dialogue provides a second chance to capture its message from the addressee's comments. Fox Tree went on to show that vicarious learners who overheard dialogues outperformed those who overheard monologues in a referential communication task. Other researchers (Cox et al., 1999; Lee et al., 1998; McKendree et al., 1998) have also shown that those overhearing dialogues achieve considerable learning gains.
In later sections, predictions based on the role of dialogue in learning are referred to as the dialogue per se hypothesis.

Role of Question Asking in Learning During Dialogue

Questions have often been identified as important tools for supporting learning from dialogue. Some theories of critical thinking go as far as to treat all statements made during learning as answers to questions, even if the questions are not explicitly asked (e.g., Paul & Elder, 2000). As indicated earlier, it is well established that learning and comprehension scores increase when students are trained to ask good questions (Craig et al., 2000; King, 1989, 1994; Rosenshine, Meister, & Chapman, 1996).

The construction–integration model of the comprehension of text and discourse (Kintsch, 1998; Kintsch & Welsch, 1991; Kintsch, Welsch, Schmalhofer, & Zimny, 1990; Otero & Kintsch, 1992) leads to an understanding of the links underlying learning, comprehension, and questions. According to this model, comprehension occurs in a two-step process. The first step is construction. During construction, the concepts of the presented material are activated to form a network of active concepts. This network draws on the syntactic, semantic, and world knowledge of the learner. The second phase is integration. In the integration phase, the links among similar concepts in the network are strengthened, whereas dissimilar concepts lose strength. This process of construction and integration forms the mental representation of the material, and the integration of the new content with previous knowledge improves comprehension of the new information (Kintsch, 1988, 1998; McNamara & Kintsch, 1996).

According to Kintsch's (1998) model, comprehension is a loosely structured, bottom-up process that is highly sensitive to context and easily adjusts to shifts in the environment. In fact, comprehension is chaotic until it reaches consciousness: It is modeled as a weakly controlled, bottom-up construction process followed by constraint satisfaction through spreading activation, which yields the coherence experienced in consciousness (Kintsch, 1998). We believe Kintsch's model holds for mastering content in most comprehension tasks. However, we also believe that when content is preceded by deep-level-reasoning questions, activated mental models and concepts play an important role in regulating comprehension. To illustrate how deep-level-reasoning questions regulate construction and integration during comprehension in vicarious learning tasks, consider a typical example of a question on hardware and the answer that followed. Question: "How is RAM used by the CPU when you are running an application program?" In terms of Kintsch's model, this question presumably activates a mental model containing a variety of concepts: RAM, the CPU, their relationship to each other, and their relationships to running application programs. Links form between these concepts and strengthen during the integration phase.
Answer: RAM is used by the CPU for short-term memory storage in the CPU’s execution of programs. The already-existing strong links formed from the initial question facilitate the integration of links to the new concepts. Questions may play an important role in the knowledge acquisition process during vicarious learning (Kintsch, 1998). They can serve as guides to the activation of relevant concepts and mental models (Chi, 2000). These concepts can then help information integration by forming stronger bonds. In so doing, questions can produce better comprehension (or mental representations) of the material and thus better learning. In later sections, predictions based on the role of overhearing deep-level-reasoning questions as a feature of dialogue are referred to as the questions feature of dialogue hypothesis.
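The construction and integration phases described above can be caricatured computationally. The following toy sketch is purely illustrative and is not Kintsch's actual model: the concepts, link strengths, and the simple settling rule are all invented for illustration. Construction yields a network of weighted links among activated concepts; integration settles activation by repeatedly spreading it over those links, so concepts tightly interlinked with the question-primed material end up highly active while weakly linked concepts fade.

```python
import numpy as np

# Toy "integration" phase of a construction-integration-style network.
# Concepts and link strengths below are invented for illustration only.
concepts = ["RAM", "CPU", "running a program", "printer"]

# Symmetric link matrix from the "construction" phase: the question about
# RAM, the CPU, and running application programs links the first three
# concepts strongly; "printer" is only weakly connected.
W = np.array([
    [1.0, 0.8, 0.6, 0.0],
    [0.8, 1.0, 0.7, 0.0],
    [0.6, 0.7, 1.0, 0.1],
    [0.0, 0.0, 0.1, 1.0],
])

# Integration: spread activation over the links until it settles
# (a simple power-iteration relaxation, renormalized each step).
a = np.ones(len(concepts)) / len(concepts)
for _ in range(50):
    a = W @ a
    a /= a.max()

for name, act in zip(concepts, a):
    print(f"{name}: {act:.2f}")
# Question-relevant concepts settle with high activation; "printer" stays low.
```

The point of the sketch is only the qualitative behavior: after relaxation, the interconnected question-relevant concepts dominate the network, which is the sense in which a preceding deep-level-reasoning question can prepare strong links for the content that follows.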

DEEP-LEVEL-REASONING-QUESTION EFFECT

577

EXPERIMENT 1

Experiment 1 was conducted to contrast the interactive and vicarious learning gains obtained by Craig et al. (2004; Experiments 1 and 2) with conditions that contrasted learning from monologue, learning from dialogue per se, and learning from dialogue containing deep-level-reasoning questions. The aim of this experiment was to evaluate pretest-to-posttest learning gains across five instructional conditions. One of these conditions involved learners directly interacting with AutoTutor by carrying on a dialogue; the remaining four were vicarious conditions. In the interactive condition there was direct interactive dialogue between the learner and AutoTutor on the 12 topics concerned with computer literacy. The learner used a dialogue box and keyboard to respond to AutoTutor's spoken questions, assertions, hints, prompts, pumps, back-channel feedback, and gestures. This condition, which included 26 participants, is simply called interactive. The video and audio of each interactive session were recorded. This recording included all content presented by AutoTutor, along with all student contributions to the dialogue. Each recorded session was then presented to a yoked participant who simply watched and listened to it, as in the Craig et al. (2004) research. This condition included 24 participants and is called yoked–vicarious. A second vicarious learning condition involved presenting the learner with a monologue containing the content of the ideal answer plus the expectations on each of the 12 topics. Following the information delivery, the sentences were presented by the same voice engine and agent that were used in the interactive and yoked–vicarious conditions. The number of sentences in the ideal answers ranged from five to eight across the 12 computer literacy topics, with the same number of expectations included in each topic. Thus, learners were given a monologue presentation of between 10 and 16 sentences on each topic in this condition. Each topic concluded with the brief summary used in the interactive and yoked–vicarious conditions. The information deliveries and brief summaries (see AutoTutor section) were also presented in the two remaining vicarious conditions. This condition included 27 participants and is called monologue–vicarious. In a third vicarious learning condition, each sentence in the ideal answer was preceded by a deep-level-reasoning question that anticipated the content spoken by the agent on his next conversational turn. These questions, as part of a dialogue, were each designed to activate relevant concepts and provide a coherent context for the content of the sentence that followed. The questions were asked by a second, distinct voice engine, but only one agent, the same one used in the other conditions, was located on the monitor. After presentation of the sentences in the ideal answer, each preceded by a question, the expectation sentences were presented, but the expectation sentences were not preceded by questions. That is, the expectation sentences were presented in monologue format. This condition included 27 participants and is called half-questions–vicarious. In the final vicarious learning condition, each sentence in the ideal answer and the expectations was preceded by a deep-level-reasoning question as part of the dialogue. This condition included 23 participants and is called full-questions–vicarious.

Predictions

The dialogue per se hypothesis predicts that learners in the three dialogue conditions (interactive, yoked–vicarious, and full-questions–vicarious) should all outperform those in the monologue condition. Furthermore, those in these three dialogue conditions should not differ from each other. It is not clear what the dialogue hypothesis predicts concerning the half-questions–vicarious condition. The questions feature of dialogue hypothesis predicts that vicarious learners overhearing dialogue in the full-questions–vicarious condition should outperform those in the interactive, yoked–vicarious, and monologue–vicarious conditions. These three groups should not differ from each other. It is unclear what the question-asking hypothesis predicts in the half-questions–vicarious condition.

METHOD

Participants

A total of 127 students, whose participation met a course requirement, were drawn from introductory psychology classes at the University of Memphis. An additional 31 participants completed the study, but their data were replaced because these participants exceeded a domain-knowledge criterion on a pretest (see Materials and Procedures section). This criterion was adopted because (a) some students in the pool had completed a college-level computer literacy course that this version of AutoTutor was designed to support and (b) previous research has shown that learning gains are greatest among those with low domain knowledge (e.g., Craig et al., 2002; Mayer, 2001; Moreno & Mayer, 1999). As indicated, the design included one interactive condition and four vicarious conditions.

Materials and Procedures

The experiment included paper-and-pencil and computerized materials. The paper-and-pencil materials, used to evaluate pretest-to-posttest learning gains, consisted of two multiple-choice tests. The two tests each comprised 24 four-foil deep-level-reasoning questions, 2 on each of the 12 topics covered in the session (see Appendix C for examples). The two tests have been shown to be equivalent in previous research (Craig et al., 2004). The data of any participant who exceeded a score of 9 on the pretest (chance is 6) were replaced. The order in which the tests were administered, as pretest or posttest, was counterbalanced across the participants in each condition, although the counterbalancing was not complete in the three conditions that included an odd number of participants. Each participant was randomly assigned to one of the five conditions upon arrival at the laboratory, although it was necessary to assign one participant to the interactive condition before the first could be assigned to the yoked–vicarious condition. Immediately after informed consent was obtained, the pretest was administered, followed by the computerized session, the posttest, and then debriefing. Four Pentium computer systems (with headphones) were used for program presentation. The computerized materials were produced using AutoTutor and two other applications, Xtrain (Hu, 1998) and Microsoft Agent 2.0. The latter two applications were used to script and control presentations in the monologue–vicarious, half-questions–vicarious, and full-questions–vicarious conditions. The duration of each student's interactive session with AutoTutor varied somewhat, depending on the nature of the dialogue, as it did, of course, in the yoked–vicarious condition. These sessions mostly lasted between 35 and 40 min. The durations of the computerized sessions in the monologue–vicarious, half-questions–vicarious, and full-questions–vicarious conditions were equated at about 30 min. This was accomplished by placing brief pauses between sentences in all three conditions and further pauses in the monologue–vicarious and half-questions–vicarious conditions to correspond to the time required in the full-questions–vicarious condition. The entire session lasted about 1 hr in all five conditions.

RESULTS AND DISCUSSION

Preliminary analyses revealed that there were no differences among the five instructional conditions on the pretest, F(4, 122) = 1.04. A one-way analysis of covariance (instruction condition: interactive versus yoked–vicarious versus monologue–vicarious versus half-questions–vicarious versus full-questions–vicarious) was used to evaluate learning gains, using pretest scores as covariates (Dunlop, Cortina, Vaslow, & Burke, 1996). It yielded a significant effect of instruction condition, F(4, 121) = 2.97, p < .03. Further analyses revealed that learners in the full-questions–vicarious condition significantly outperformed those in each of the other four conditions. The mean pretest scores, posttest scores, standard deviations, and pretest-to-posttest effect sizes (Cohen's d) for each of the five conditions are presented in Table 1. As seen in the table, the pretest-to-posttest learning gains for the interactive, monologue–vicarious, and half-questions–vicarious conditions were reasonably comparable (4.52 to 5.15), as were pretest-to-posttest effect sizes (d = 1.62 to 1.76), although performance in the yoked–vicarious condition was somewhat depressed (d = 1.07). Clearly, these findings provide more support for the questions feature of dialogue hypothesis than for the dialogue per se hypothesis. The failure to find a significant difference between the interactive condition and the yoked–vicarious condition was unexpected and failed to replicate the results of Craig et al. (2004). In the Craig et al. research described earlier, the mean change score from pretest to posttest across two experiments was 4.52 for the interactive conditions and 3.05 for the yoked–vicarious, with effect sizes of 1.77 and 1.20, respectively. In the present research the corresponding change scores were comparable, 4.96 and 3.42, with effect sizes of 1.72 and 1.07. Note that the power was greater in the Craig et al. research, with the main analysis involving 60 learners per group.

TABLE 1
The Means and Standard Deviations of Each Condition's Pretest and Posttest Scores, Along With Each Condition's Pretest-to-Posttest Effect Size (Cohen's d) for Experiment 1

                               Pretest          Posttest
Condition                      M      SD        M       SD      Cohen's d
Interactive                    5.65   1.87      10.62   3.63      1.72
Yoked vicarious                6.46   2.43       9.83   3.73      1.07
Monologue vicarious            5.96   2.16      11.11   3.52      1.76
Half-questions vicarious       6.37   1.80      10.89   3.52      1.62
Full-questions vicarious       6.70   1.87      13.30   4.30      1.99
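The effect sizes reported in Table 1 can be recovered directly from its means and standard deviations. The sketch below assumes Cohen's d was computed as the pretest-to-posttest mean gain divided by the pooled pretest and posttest standard deviation (one common convention for repeated measures designs; with the table's values this assumption reproduces the d column to two decimals):

```python
from math import sqrt

def cohens_d(pre_m, pre_sd, post_m, post_sd):
    """Pretest-to-posttest effect size: mean gain over the pooled SD."""
    pooled_sd = sqrt((pre_sd ** 2 + post_sd ** 2) / 2)
    return (post_m - pre_m) / pooled_sd

# Experiment 1 values from Table 1:
# (pretest M, pretest SD, posttest M, posttest SD)
table1 = {
    "Interactive":              (5.65, 1.87, 10.62, 3.63),  # d = 1.72
    "Yoked vicarious":          (6.46, 2.43,  9.83, 3.73),  # d = 1.07
    "Monologue vicarious":      (5.96, 2.16, 11.11, 3.52),  # d = 1.76
    "Half-questions vicarious": (6.37, 1.80, 10.89, 3.52),  # d = 1.62
    "Full-questions vicarious": (6.70, 1.87, 13.30, 4.30),  # d = 1.99
}
for condition, values in table1.items():
    print(f"{condition}: d = {cohens_d(*values):.2f}")
```

Dunlop et al. (1996) discuss why effect sizes for matched or repeated measures designs should be based on the original standard deviations rather than on the paired-t statistic, which is consistent with the pooled-SD convention sketched here.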

EXPERIMENT 2

The major finding of Experiment 1 concerns the impact of embedding deep-level-reasoning questions into educational content as part of dialogue, as shown by the pretest-to-posttest learning gains (6.70) and effect size (d = 1.99). The finding may be of important practical significance for computer-assisted instruction and distance learning. Thus, we deemed it necessary to partly replicate the finding in Experiment 2. In addition, we designed Experiment 2 to contrast deep-level-reasoning questions presented as dialogue with deep-level-reasoning questions presented as monologue. The design included four groups: interactive, yoked–vicarious, full-questions–vicarious with deep-level-reasoning questions presented as dialogue, and full-questions–vicarious with deep-level-reasoning questions presented as monologue. In the interactive condition, learners directly interacted with AutoTutor on the 12 topics concerned with computer literacy. The video and audio of the interactive sessions were recorded and presented to participants in the yoked–vicarious condition (see Experiment 1 for descriptions of these conditions). In the second and third vicarious conditions, each sentence in AutoTutor's ideal answer and each expectation was preceded by a deep-level-reasoning question that anticipated the content spoken by the agent on his next conversational turn. In the deep-level-reasoning-questions monologue condition, the same agent and voice engine used in the interactive and yoked–vicarious conditions spoke the questions and content. In the deep-level-reasoning-questions dialogue condition, the questions were asked by a second, distinct voice engine, but only one agent, the same one used in the other conditions, was located on the monitor. The information deliveries and brief summaries presented by AutoTutor were also presented in these two vicarious conditions.

Predictions

The standard dialogue per se hypothesis predicts that learners in the interactive, yoked–vicarious, and deep-level-reasoning-questions dialogue conditions should not differ from each other in terms of learning gains and that all should outperform those in the deep-level-reasoning-questions monologue condition. The questions feature of dialogue hypothesis predicts that learners in the deep-level-reasoning-questions dialogue and deep-level-reasoning-questions monologue conditions should outperform those in the other two conditions. Furthermore, learners in the two deep-level-reasoning-questions conditions should not differ from each other.

METHOD

Participants

The participants in this study were 140 undergraduates, 35 per group, selected from the Department of Psychology's subject pool at the University of Memphis. Only the data of students with low domain knowledge were included in the study. Participants not meeting this low-knowledge criterion were replaced. This criterion was identical to the one used in Experiment 1.

Materials

Two four-foil 24-item multiple-choice tests on computer literacy were used to evaluate learning gains. These were the tests used in Experiment 1. The counterbalancing procedures used in Experiment 1 were repeated.


Procedure

When participants arrived at the laboratory, they were randomly assigned to experimental conditions, although a participant in the interactive condition had to be tested before any learner could be assigned to the yoked–vicarious condition. Informed consent was then obtained, followed by the pretest. Although all participants completed the study, only the data of those with low domain knowledge were included in the data analyses (see Experiment 1). As in Experiment 1, this criterion was set at 9 or fewer correct on the four-foil 24-item pretests. During the session, participants in all conditions were seated alone in a room at a computer with headphones. After the computerized session was completed, participants received the posttest and debriefing.

RESULTS AND DISCUSSION

A one-way analysis of variance (instruction condition: interactive versus yoked–vicarious versus deep-level-reasoning-questions dialogue versus deep-level-reasoning-questions monologue) performed on the pretest data yielded no significant effects, F(3, 136) = 1.85. Next, an analysis of covariance was performed on the posttest data using the pretest data as covariates. This analysis yielded a significant effect of instruction condition, F(3, 135) = 10.94, p < .001. Post hoc comparisons revealed that the two deep-level-reasoning-questions conditions significantly outperformed the interactive (p < .001) and the yoked–vicarious (p < .001) conditions. The mean pretest and posttest scores for each condition, along with standard deviations as well as pretest-to-posttest change scores and effect sizes (Cohen's d), are shown in Table 2. As seen in the table, the two deep-level-reasoning-questions conditions yielded an average effect size of 2.11, whereas the mean for the other two conditions was 1.42. Neither the difference between the interactive and yoked–vicarious conditions nor the difference between the deep-level-reasoning-questions dialogue condition and the deep-level-reasoning-questions monologue condition approached significance.

TABLE 2
The Means and Standard Deviations of Each Condition's Pretest and Posttest Scores, Along With Each Condition's Pretest-to-Posttest Effect Size (Cohen's d) for Experiment 2

                                           Pretest          Posttest
Condition                                  M      SD        M       SD      Cohen's d
Interactive                                6.09   2.23       9.80   2.82      1.59
Yoked vicarious                            6.57   1.79       9.57   3.14      1.25
Deep-level-reasoning-questions monologue   6.03   1.93      12.14   4.01      1.95
Deep-level-reasoning-questions dialogue    6.91   1.76      13.66   3.76      2.29

GENERAL DISCUSSION

Experiments 1 and 2 clearly showed the importance of deep-level-reasoning questions as a feature of dialogue during vicarious learning (Craig et al., 2000; King, 1989, 1994; Kintsch et al., 1990; Otero & Kintsch, 1992; Rosenshine et al., 1996) over dialogue per se (Fox Tree, 1999; Graesser et al., 2003; Graesser et al., 1995; Schober & Clark, 1989). Although the deep-level-reasoning questions, whether presented as part of dialogue or monologue, supported larger learning gains than those of the other conditions, we do not dismiss the dialogue per se hypothesis. We believe that dialogue plays an important role in most learning and comprehension contexts. Indeed, learners in the deep-level-reasoning-questions monologue condition may well have interpreted the questions and the content that followed as dialogue. We currently know of no way to examine this possibility. The deep-level-reasoning questions embedded in the course content may have led to self-explaining, and they certainly activated incomplete mental models (Chi, 2000) that allowed learners to detect discrepancies between those models and the model presented by the text. The major finding from Experiments 1 and 2 concerns the impact of embedding deep-level-reasoning questions into educational content, as shown by the pretest-to-posttest learning gains and effect sizes averaging more than 2.0 (Cohen's d) across the two experiments. These effect sizes are comparable to those produced by accomplished human tutors (Bloom, 1984). In concluding, we need to highlight some of the limitations in our current knowledge and some questions that need to be addressed by those concerned with distance learning and computer-based courses (Anderson et al., 1995; Graesser et al., 2000; McNamara et al., 2004; Scardamalia et al., 1992). First, to date, our research on dialogue and question asking has involved only the domain of computer literacy.
Thus, it is necessary to explore the deep-level-reasoning-questions effect in other domains in future research. Second, the questions and the course content that followed each were spoken by voice engines in the current research. Would the same learning gains be achieved if the deep-level-reasoning questions were presented as on-screen printed text and embedded in course content that is also presented as on-screen printed text? Third, are all deep-level-reasoning-question category frames equal when it comes to supporting knowledge acquisition processes on the part of vicarious learners during dialogue? We drew our deep-level-reasoning questions unsystematically from a taxonomy of deep-level-question categories. As indicated, the question frames were drawn from six categories in a question taxonomy presented by Graesser and Person (1994). As also indicated, the


deep-level-reasoning-question frames were comparison, interpretation, causal antecedent, causal consequent, instrumental/procedural, and enablement. The question is, what specific features of which categories of deep-level-reasoning questions during dialogue best support knowledge acquisition processes during vicarious learning? Finally, once those features are identified, exactly how do they support cognitive learning processes, and what other kinds of dialogue moves involve these same features?

ACKNOWLEDGMENTS

This research was supported by the Institute for Education Sciences (IES) Grant R305H0r0169. The Tutoring Research Group at the University of Memphis is an interdisciplinary research team composed of approximately 35 researchers from psychology, computer science, physics, and education (visit http://www.autotutor.org). The research on AutoTutor was supported by the National Science Foundation (SBR 9720314, REC 0106965, REC 0126265, ITR 0325428) and the DOD Multidisciplinary University Research Initiative (MURI) administered by ONR under Grant N00014-00-1-0600. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of IES (DOE), DOD, ONR, or NSF. Scotty D. Craig is currently at the University of Pittsburgh, Learning Research and Development Center, 3939 O'Hara Street, Pittsburgh, PA 15260.

REFERENCES

Anderson, J. R., Corbett, A. T., Koedinger, K. R., & Pelletier, R. (1995). Cognitive tutors: Lessons learned. Journal of the Learning Sciences, 4, 167–207.
Bandura, A. (1962). Social learning through imitation. In M. R. Jones (Ed.), Nebraska Symposium on Motivation (pp. 211–269). Lincoln: University of Nebraska Press.
Bandura, A. (1977). Social learning theory. Englewood Cliffs, NJ: Prentice Hall.
Bartlett, F. C. (1932). Remembering: A study in experimental and social psychology. Cambridge, England: Cambridge University Press.
Bloom, B. S. (1956). Taxonomy of educational objectives: The classification of educational goals. Handbook I: Cognitive domain. New York: McKay.
Bloom, B. S. (1984). The 2-sigma problem: The search for methods of group instruction as effective as one-to-one tutoring. Educational Researcher, 13(6), 4–16.
Brennan, S. E., & Clark, H. H. (1996). Conceptual pacts and lexical choice in conversation. Journal of Experimental Psychology: Learning, Memory, and Cognition, 22, 1482–1493.
Chi, M. T. H. (2000). Self-explaining expository texts: The dual process of generating inferences and repairing mental models. In R. Glaser (Ed.), Advances in instructional psychology (pp. 161–238). Mahwah, NJ: Lawrence Erlbaum Associates, Inc.
Chi, M. T. H., Bassok, M., Lewis, M. W., Reimann, P., & Glaser, R. (1989). Self-explanations: How students study and use examples in learning to solve problems. Cognitive Science, 13, 145–182.


Chi, M. T. H., de Leeuw, N., Chiu, M., & LaVancher, C. (1994). Eliciting self-explanations improves understanding. Cognitive Science, 18, 439–477.
Chi, M. T. H., Siler, S. A., Jeong, H., Yamauchi, T., & Hausmann, R. G. (2001). Learning from human tutoring. Cognitive Science, 25, 471–533.
Clark, H. H., & Schaefer, E. F. (1989). Contributing to discourse. Cognitive Science, 13, 259–294.
Cohen, P. A., Kulik, J. A., & Kulik, C. C. (1982). Educational outcomes of tutoring: A meta-analysis of findings. American Educational Research Journal, 19, 237–248.
Collins, A., Brown, J. S., & Larkin, K. M. (1980). Inference in text understanding. In R. J. Spiro, B. C. Bruce, & W. F. Brewer (Eds.), Theoretical issues in reading comprehension (pp. 385–407). Hillsdale, NJ: Lawrence Erlbaum Associates, Inc.
Corbett, A. T. (2001). Cognitive computer tutors: Solving the two-sigma problem. In M. Bauer, P. I. Gmytrasiewicz, & J. Vassileva (Eds.), User modeling 2001: 8th International Conference (pp. 137–147). Berlin: Springer-Verlag.
Cox, R., McKendree, J., Tobin, R., Lee, J., & Mayes, T. (1999). Vicarious learning from dialogue and discourse. Instructional Science, 27, 431–458.
Craig, S. D., Driscoll, D., & Gholson, B. (2004). Constructing knowledge from dialogue in an intelligent tutoring system: Interactive learning, vicarious learning, and pedagogical agents. Journal of Educational Multimedia and Hypermedia, 13, 163–183.
Craig, S. D., Gholson, B., & Driscoll, D. (2002). Animated pedagogical agents in multimedia educational environments: Effects of agent properties, picture features, and redundancy. Journal of Educational Psychology, 94, 428–434.
Craig, S. D., Gholson, B., Ventura, M., Graesser, A. C., & Tutoring Research Group. (2000). Overhearing dialogues and monologues in virtual tutoring sessions: Effects on questioning and vicarious learning. International Journal of Artificial Intelligence in Education, 11, 242–253.
Davey, B., & McBride, S. (1986). Effects of question generating training on reading comprehension. Journal of Educational Psychology, 78, 256–262.
Derry, S. J., & Potts, M. K. (1998). How tutors model students: A study of personal constructs in adaptive tutoring. American Educational Research Journal, 35, 65–99.
Driscoll, D., Craig, S. D., Gholson, B., Ventura, M., Hu, X., & Graesser, A. C. (2003). Vicarious learning: Effects of overhearing dialogue and monologue-like discourse in a virtual tutoring session. Journal of Educational Computing Research, 29, 431–450.
Duffy, S. A., Shinjo, M., & Myers, J. L. (1990). The effect of encoding task on memory for sentence pairs varying in causal relatedness. Journal of Memory and Language, 29, 27–42.
Dunlop, W. P., Cortina, J. M., Vaslow, J. B., & Burke, M. J. (1996). Meta-analysis of experiments with matched groups or repeated measures designs. Psychological Methods, 1, 170–177.
Ferguson-Hessler, M. G. M., & de Jong, T. (1990). Studying physics texts: Differences in study processes between good and poor solvers. Cognition and Instruction, 7, 41–54.
Fox Tree, J. E. (1999). Listening in on monologues and dialogue. Discourse Processes, 27, 35–53.
Gavelek, J. R., & Raphael, T. E. (1985). Metacognition, instruction, and the role of questioning activities. In D. L. Forrest-Pressley, G. E. MacKinnon, & T. G. Waller (Eds.), Metacognition, cognition, and human performance (Vol. 2, pp. 103–136). Orlando, FL: Academic Press.
Gernsbacher, M. A. (1997). Two decades of structure building. Discourse Processes, 23, 265–304.
Gholson, B., & Craig, S. D. (in press). Promoting constructive activities that support learning during computer-based instruction. Educational Psychology Review.
Graesser, A. C., Baggett, W., & Williams, K. (1996). Question-driven explanatory reasoning. Applied Cognitive Psychology, 10, S17–S32.
Graesser, A. C., McNamara, D. S., & VanLehn, K. (2005). Scaffolding deep comprehension strategies through Point&Query, AutoTutor, and iStart. Educational Psychologist, 40, 225–234.
Graesser, A. C., Millis, K. K., & Zwaan, R. A. (1997). Discourse comprehension. In J. T. Spence, J. M. Darley, & D. J. Foss (Eds.), Annual review of psychology (pp. 163–189). Palo Alto, CA: Annual Reviews.


Graesser, A. C., Moreno, K., Marineau, J., Adcock, A., Olney, A., & Person, N. (2003). AutoTutor improves deep learning of computer literacy: Is it the dialogue or the talking head? In U. Hoppe, F. Verdejo, & J. Kay (Eds.), Proceedings of artificial intelligence in education (pp. 47–54). Amsterdam: IOS Press.
Graesser, A. C., & Person, N. (1994). Question asking during tutoring. American Educational Research Journal, 31, 104–137.
Graesser, A. C., Person, N., Harter, D., & Tutoring Research Group. (2001). Teaching tactics and dialogue in AutoTutor. International Journal of Artificial Intelligence in Education, 12, 257–279.
Graesser, A. C., Person, N. K., & Magliano, J. P. (1995). Collaborative dialogue patterns in naturalistic one-to-one tutoring. Applied Cognitive Psychology, 9, 495–522.
Graesser, A. C., Singer, M., & Trabasso, T. (1994). Constructing inferences during narrative text comprehension. Psychological Review, 101, 371–395.
Graesser, A., Wiemer-Hastings, P., Wiemer-Hastings, K., Harter, D., Person, N., & Tutoring Research Group. (2000). Using latent semantic analysis to evaluate the contributions of students in AutoTutor. Interactive Learning Environments, 8, 149–169.
Hartley, J., & Trueman, M. (1985). A research strategy for text designers: The role of headings. Instructional Science, 14, 99–155.
Hausmann, R. G. M., & Chi, M. T. H. (2002). Can a computer interface support self-explaining? Cognitive Technology, 7, 4–14.
Holland, V. M., Kaplan, J. D., & Sams, M. R. (Eds.). (1995). Intelligent language tutors. Hillsdale, NJ: Lawrence Erlbaum Associates, Inc.
Hu, X. (1998). Xtrain: Multimedia authoring and presentation software using Microsoft Agents [Computer software]. Available from [email protected]
King, A. (1989). Effects of self-questioning training on college students' comprehension of lectures. Contemporary Educational Psychology, 14, 366–381.
King, A. (1994). Guiding knowledge construction in the classroom: Effect of teaching children how to question and explain. American Educational Research Journal, 31, 338–368.
King, A., Staffieri, A., & Adelgais, A. (1998). Mutual peer tutoring: Effects of structuring tutorial interaction to scaffold peer learning. Journal of Educational Psychology, 90, 134–152.
Kintsch, W. (1988). The use of knowledge in discourse processing: A construction–integration model. Psychological Review, 95, 163–182.
Kintsch, W. (1998). Comprehension: A paradigm for cognition. New York: Cambridge University Press.
Kintsch, W., & Welsch, D. M. (1991). The construction–integration model: A framework for studying memory for text. In W. E. Hockley & S. Lewandowsky (Eds.), Relating theory and data: Essays on human memory in honor of Bennett B. Murdock (pp. 367–385). Hillsdale, NJ: Lawrence Erlbaum Associates, Inc.
Kintsch, W., Welsch, D. M., Schmalhofer, F., & Zimny, S. (1990). Sentence memory: A theoretical analysis. Journal of Memory and Language, 29, 133–159.
Landauer, T. K., & Dumais, S. T. (1997). A solution to Plato's problem: The latent semantic analysis theory of acquisition, induction, and representation of knowledge. Psychological Review, 104, 211–240.
Lee, J., Dineen, F., & McKendree, J. (1998). Supporting student discussions: It isn't just talk. Education and Information Technologies, 3, 17–29.
Lesgold, A., Lajoie, S., Bunzo, M., & Eggan, G. (1992). SHERLOCK: A coached practice environment for an electronics troubleshooting job. In J. H. Larkin & R. W. Chabay (Eds.), Computer-assisted instruction and intelligent tutoring systems: Shared goals and complementary approaches (pp. 201–238). Hillsdale, NJ: Lawrence Erlbaum Associates, Inc.
Loman, N. L., & Mayer, R. E. (1983). Signaling techniques that increase the understandability of expository prose. Journal of Educational Psychology, 75, 402–412.

DEEP-LEVEL-REASONING-QUESTION EFFECT

587

Lorch, R. F., Jr., & Lorch, E. P. (1995). Effects of organizational signals on text processing strategies. Journal of Educational Psychology, 87, 537–544.
Mayer, R. E. (1997). Multimedia learning: Are we asking the right questions? Educational Psychologist, 32, 1–19.
Mayer, R. E. (2001). Multimedia learning. New York: Cambridge University Press.
McKendree, J., Good, J., & Lee, J. (2001, June). Effects of dialogue characteristics on performance of overhearers. Paper presented at the International Conference on Communication, Problem-Solving, and Learning, Glasgow, Scotland.
McKendree, J., Stenning, K., Mayes, T., Lee, J., & Cox, R. (1998). Why observing a dialogue may benefit learning. Journal of Computer Assisted Learning, 14, 110–119.
McNamara, D. S., & Kintsch, W. (1996). Learning from text: Effect of prior knowledge and text coherence. Discourse Processes, 22, 247–288.
McNamara, D. S., Levinstein, I. B., & Boonthum, C. (2004). iSTART: Interactive strategy training for active reading and thinking. Behavior Research Methods, Instruments, and Computers, 36, 222–233.
Moreno, R., & Mayer, R. E. (1999). Cognitive principles of multimedia learning: The role of modality and contiguity. Journal of Educational Psychology, 91, 358–368.
Nathan, M. J., Mertz, K., & Ryan, R. (1994, April). Learning through self-explanation of mathematics examples: Effects of cognitive load. Poster session presented at the annual meeting of the American Educational Research Association, New Orleans, LA.
Otero, J., & Kintsch, W. (1992). Failures to detect contradictions in text: What readers believe vs. what they read. Psychological Science, 3, 229–234.
Palincsar, A. S., & Brown, A. (1984). Reciprocal teaching of comprehension-fostering and comprehension-monitoring activities. Cognition and Instruction, 1, 117–175.
Paul, R. W., & Elder, L. (2000). Critical thinking handbook: Basic theory and instructional structures. Dillon Beach, CA: Foundation for Critical Thinking.
Paulsen, M. F. (1995). Pedagogical techniques for computer-mediated communication. Retrieved May 26, 2006, from http://www.nettskolen.com/forskning/19/cmcped.html
Rosenshine, B., Meister, C., & Chapman, S. (1996). Teaching students to ask questions: A review of intervention studies. Review of Educational Research, 66, 181–221.
Rosenthal, R. L., & Zimmerman, B. J. (1978). Social learning and cognition. New York: Academic Press.
Scardamalia, M., Bereiter, C., Brett, C., Burtis, P. J., Calhoun, C., & Smith-Lea, N. (1992). Educational applications of a network communal database. Interactive Learning Environments, 2, 47–71.
Schober, M. F., & Clark, H. H. (1989). Understanding by addressees and overhearers. Cognitive Psychology, 21, 211–232.
Schuh, K. L. (2003). Knowledge construction in the learner-centered classroom. Journal of Educational Psychology, 95, 426–442.
Sternberg, R. J. (1987). Questioning and intelligence. Questioning Exchange, 1, 11–13.
Sweller, J. (1999). Instructional design in technical areas. Melbourne: Australian Council for Educational Research.
VanLehn, K., Graesser, A. C., Jackson, G. T., Jordan, P., Olney, A., & Rosé, C. P. (in press). Natural language tutoring: A comparison of human tutors, computer tutors, and text. Cognitive Science.
Webb, N. M. (1989). Peer interaction and learning in small groups. International Journal of Educational Research, 13, 21–39.
Wittrock, M. C. (1990). Generative processes of comprehension. Educational Psychologist, 24, 345–376.

588

CRAIG, SULLINS, WITHERSPOON, GHOLSON

APPENDIX A
Sample Information Delivery and Dialogue

Topic: The Operating System

Tutor information delivery: When you first turn on the computer, it loads an operating system. The operating system plays a vital role in how efficiently you can use the computer. Most of what you see on the screen when you use an application program, and most of the common tasks you have the program perform, are being performed by the operating system.

Virtual tutee: Why does it load the operating system? What does that accomplish?
Virtual tutor: The operating system is software that must be running in order for the computer to function.
Virtual tutee: What does turning on the computer have to do with what I've heard referred to as booting up?
Virtual tutor: When you turn the computer on, it immediately begins booting up.
Virtual tutee: What does it mean to boot up?
Virtual tutor: When you turn on the computer, it automatically begins executing instructions that help the system boot. Booting up involves getting the operating system loaded into part of the system's memory.
Virtual tutee: What happens after the operating system is loaded into memory?
Virtual tutor: Once the operating system is loaded into memory, the computer is ready to respond to requests from the computer user.
Virtual tutee: What would happen if you didn't have an operating system?
Virtual tutor: Well, you would have to provide input in terms of the zeros and ones of the binary language the computer hardware can read.
Virtual tutee: Wow, what does it mean to say that all a computer's hardware can work with is zeros and ones?
Virtual tutor: What the operating system does is make the computer more user-friendly by avoiding the need to program in the binary language. It provides an additional layer of insulation between the user and computer.
Virtual tutee: You said an additional layer of insulation. What other layers of insulation separate the user from the binary language of the hardware?
Virtual tutor: Well, let's see, application programs like word processors, spreadsheets, or graphics packages are examples. They provide another layer between the user and the hardware, making computers easier to use, more user-friendly.
Virtual tutee: What would happen if a computer didn't have an operating system?
Virtual tutor: The earliest computers were pretty primitive and didn't have operating systems. Instead, those using the computer had to set switches and patch wires between switches.


Virtual tutee: That must have been a real nuisance. How did they get from there to operating systems?
Virtual tutor: Actually, the problem was overcome early in the development of computers.
Virtual tutee: How was it overcome?
Virtual tutor: John von Neumann laid the theoretical groundwork in 1945, and the first operating systems were being used by the fifties.
Virtual tutee: How is it that the computer is able to load the operating system as soon as it is turned on?
Virtual tutor: In most computers the operating system is stored on the hard disk. When the computer is turned on, automatic commands are immediately executed that locate the operating system on the hard disk and load it into memory.
Virtual tutee: How do these automatic commands do that?
Virtual tutor: Those commands are located in read-only memory, or ROM. For now, let's talk more about how the operating system makes it possible for the hardware to read input and give you output. I'll bring up a new picture that includes translator programs.

APPENDIX B
Sample Information Delivery and Discourse for Each Condition

Topic: The CPU and Computer Speed

Virtual tutor, information delivery: There are many important issues to be considered when buying a computer. Compatibility is one important issue, but such issues as speed also need to be considered.

Virtual tutee, dialogue, deep-level-reasoning-question condition: Why is speed important?
Virtual tutee, dialogue, shallow-level-reasoning-question condition: Is speed important?
Virtual tutee, dialogue, comments condition: Speed is important.
Virtual tutor, all conditions: The faster the CPU, the faster it can process data, exchange information with RAM, and communicate with peripherals.
Virtual tutee, dialogue, deep-level-reasoning-question condition: How can a manufacturer increase the speed of the computer? What can they do to make it faster?
Virtual tutee, dialogue, shallow-level-reasoning-question condition: Can you increase the speed?
Virtual tutee, dialogue, comments condition: You can increase the speed.


Virtual tutor, all conditions: Well, one thing manufacturers do is increase the clock speed of the computer.
Virtual tutee, dialogue, deep-level-reasoning-question condition: But with a clock, you want to know how accurate it is, not how to make it go faster. Why would they want a clock to go faster, rather than increase its accuracy?
Virtual tutee, dialogue, shallow-level-reasoning-question condition: Is clock speed important?
Virtual tutee, dialogue, comments condition: Clock speed is important.

APPENDIX C
Sample Four-Foil Questions Used for Pretests and Posttests

1. What does the CPU use ROM for when executing programs?
a. Short-term storage
b. To hold instructions necessary to reboot
c. Long-term storage
d. To bypass the operating system

2. What happens to RAM when an application program terminates?
a. It lets the user know the program has terminated.
b. Nothing happens to RAM.
c. The program's contents disappear.
d. The operating system goes to the hard disk and ROM.

3. How does the CPU know when you give a print command?
a. The command goes from the monitor to the operating system to the CPU.
b. The command goes from the monitor to RAM to the CPU.
c. The command goes to the printer then the CPU.
d. The command goes to the CPU then the printer.

4. How does the CPU communicate with peripherals?
a. Through the operating system and RAM
b. Through RAM only
c. Through the operating system only
d. Through the CPU only

5. How does the computer access data from a magnetic tape cartridge?
a. The same way it does from a floppy disk
b. The same way it does from cassette
c. The same way it does from a compact disk
d. The same way it does from a DVD

6. How does the computer locate information stored on a floppy disk?
a. It searches very quickly from the beginning to the location.
b. It searches very quickly from the last place data was stored.
c. It searches very quickly directly to the location.
d. It searches very quickly from the beginning of the disk.

7. You have an old picture of your grandmother that a relative in a different state wants you to send her over the Internet. What do you need to send it using a dial-up connection?
a. A digital camera and a network card
b. A scanner and a modem
c. A scanner and a network card
d. A digital camera, a scanner, and a network card

8. What are the advantages of a digital camera over a regular camera for sending photographs over the Internet?
a. Digital photographs go directly to a disk.
b. Digital photographs give much better bit depth.
c. Digital photographs are easier to scan into the computer.
d. Digital photographs give much better resolution.

9. If you buy a new application program and it runs very slowly, what is probably the problem?
a. You have too many applications on the hard drive.
b. You need to add RAM or virtual memory.
c. You need a new CPU.
d. You need a new computer.
