In electronic learning environments, instructional designers can easily vary the control learners have over the learning environment. The intuitive appeal of learner control is that learners have the opportunity to adapt the learning materials to their own (cognitive) needs (Lawless and Brown 1997). This seems especially relevant for animated multimedia instructions, where learners have to integrate textual and pictorial information within a restricted amount of time. The high cognitive load associated with learning from such instructional materials has stimulated a large amount of research aimed at reducing this load (e.g., Mayer 2001; Sweller 1999). One of the most prominent theories in the field, Richard Mayer’s theory of multimedia learning, has provided a range of guidelines for multimedia instructions that alleviate the cognitive load imposed on the learner and improve transfer performance (for an overview, see Mayer and Moreno 2003). In two studies, Mayer and his colleagues also looked at the cognitive benefits of introducing learner control in animated multimedia instructions (Mayer and Chandler 2001; Mayer et al. 2003). Results of both studies showed that giving learners some control over the presentation of the instructions resulted in higher scores on a transfer test, which according to the authors could be attributed to a reduction in cognitive load. So, giving learners control over an animated multimedia instruction seems an effective and easy-to-implement way to enhance learning.

Research outside Mayer’s theoretical framework, however, has been less straightforward in demonstrating the benefits of learner control. First, several decades of research on learner control in computer-based instruction have produced a history of mixed results, as a number of reviews have amply demonstrated (e.g., Niemiec et al. 1996; Steinberg 1977, 1989; Williams 1996). As Reeves (1993) suggested, an important reason for the lack of consensus in this field was the absence of adequate theoretical models and the poor methodological quality of the empirical studies. Nevertheless, some general agreement existed that in most cases learner control was at best as effective as system control, and that individual differences in cognitive variables like prior knowledge and motivational variables like interest should be taken into account when considering the effects of learner control (e.g., Williams 1996).

Second, more recent research on the effectiveness of instructional animations has looked at the benefits of learner control as well, because several authors have argued that it might help learners deal with the complexity and transiency of dynamic visualizations (e.g., Tversky et al. 2002; Narayanan and Hegarty 1998, 2002). In support of this idea, Schwan and Riempp (2004) showed that learners watching an instructional video on tying nautical knots needed less time when they studied an interactive version of the video than learners studying a non-interactive version. Also, Hasler et al. (2007) showed that giving learners the option to pause an animation on the day-night cycle improved transfer performance. Remarkably, in the Hasler et al. study learners hardly used the control options but still obtained better learning results, which Hasler et al. attributed to a deeper cognitive involvement of these learners. On the other hand, some studies have found rather detrimental effects of giving learners control over the pace and order of an animation. For example, Lowe (1999, 2004) showed that novices studying an interactive animation on weather maps did not use the interactive features in a very effective way. Moreover, Schnotz et al. (1999) found that interactive animated pictures were less effective than static pictures and argued that the interactive features only increased cognitive load and did not foster learning. In sum, research on instructional animations has so far failed to provide uniform evidence for the assumed positive effects of learner control.

So research on both computer-based instruction and instructional animations has at best been equivocal about the benefits of learner control. This warrants a closer look at both the theoretical rationale and the set-up of the two studies by Mayer and his colleagues, who consistently found better learning results for students with learner-controlled multimedia instructions (Mayer and Chandler 2001; Mayer et al. 2003).

In the first study, Mayer and Chandler (2001) presented learners with an animated multimedia explanation (a cause-and-effect account) of the formation of lightning and compared it to a version that was presented in smaller segments under the control of the learners. The authors based their theoretical argument on Mayer’s theory of multimedia learning (Mayer 2001) and cognitive load theory (Sweller 1999). First, they hypothesized that presenting the animation segment by segment would reduce cognitive load compared to presenting the animation as a whole, because learners would have more time to mentally organize the information of each segment separately. This would allow the learners to fully understand each component of the causal chain before moving on to the next one, thereby reducing the risk of cognitive overload and encouraging deep understanding, as measured by a transfer test. Second, the fact that each component of the cause-and-effect chain could be studied separately allowed for a kind of progressive mental model building that would be beneficial for learning. In the actual experiments, Mayer and Chandler implemented only a very rudimentary form of learner control. After each segment, the animation stopped and participants had to click on a ‘continue’ button that appeared on the screen. This way, participants controlled only the length of the pause between segments, not the order of the segments or the pace of each separate segment. In the second experiment of the study, a direct comparison was made between the learner-controlled and the system-controlled version of the animation. The results showed better transfer performance for participants in the learner control condition, but no differences on a retention test. Nor were significant differences found in reported mental effort.
The authors concluded that these data supported their hypotheses and recommended the use of simple learner control options in multimedia instructions, similar to the one they had used in their study.

In the second study, Mayer et al. (2003) presented students with a multimedia lesson on the workings of an electric motor. In two of their experiments (2a and 2b), they compared a system-controlled version of the program with an interactive version that was divided into smaller segments; in these experiments, learners could control both the pace and the order of the segments. Following the same line of reasoning as Mayer and Chandler (2001), Mayer and his colleagues predicted that if learners could interact with the multimedia lesson, more cognitive resources would become available for deeper understanding and thus better transfer scores. Indeed, they found better transfer scores in the learner-control condition on both an immediate test (experiment 2a) and a delayed test (experiment 2b) and concluded that these results supported their hypothesis. Furthermore, based on both these findings and the results of the Mayer and Chandler study, the authors introduced an instructional design principle called the interactivity principle, which states that people learn better from a multimedia instruction when they are able to control both the order and the pace of the presentation (Mayer et al. 2003).

In sum, both studies by Mayer and his colleagues (Mayer and Chandler 2001; Mayer et al. 2003) showed that learner control had a positive effect on learning from an animated multimedia instruction. The interactivity principle thus seems justified, at least for the kind of instructions used in these experiments.

Nevertheless, some reservations can be made about the theoretical and empirical foundation of the interactivity guideline. First, in both studies by Mayer and his colleagues, part of the rationale for the positive effect of learner control was that it would lower the cognitive load of the instructions. However, Mayer and Chandler (2001) did not find a difference in mental effort scores that would have reflected this decrease in cognitive load, whereas Mayer et al. (2003) did not measure mental effort to support their claims. Also, the need for a reduction in cognitive load originated in the idea that studying multimedia instructions without control options might lead to cognitive overload. Again, the actual mental effort scores reported in the system-controlled condition of Mayer and Chandler’s study did not indicate any such overload, with a below-average score of 2.59 on a scale from 1 to 7. So the explanation of the positive effect of learner control in terms of a reduction in load that prevents cognitive overload is not sufficiently supported by the data. Second, though Mayer and Chandler acknowledged that time might have been a confounding variable in their experimental set-up, they failed to report any data on the actual time-on-task in the learner-control condition, and neither did Mayer et al. (2003). So an alternative account of the interactivity principle in terms of increased time-on-task, which has also been hinted at by Mayer and Moreno (2003), could not be investigated. Finally, Kennedy (2004) criticized Mayer and Chandler for not including any process measurements of the actual use of the interactive options, or measures of cognitive and motivational variables, like prior knowledge and interest in the subject, that might be related to the use of the learner control options.
Taken together, part of the theoretical rationale behind the interactivity principle seems unwarranted, and alternative explanations could not be tested for lack of process measures and of measures that might explain individual differences in interactive behavior. This makes it hard to estimate the generalizability of the interactivity principle and to relate it to other research on learner control.

So we set up a study in which we tried to replicate the interactivity effect found by Mayer and his colleagues (Mayer and Chandler 2001; Mayer et al. 2003). In contrast to those studies, however, we did include a measure of time-on-task to check whether it would provide an alternative explanation for the benefits of learner control. Furthermore, we tested the interactivity principle in its fullest form, including control over the pace and order of the learning materials (as in the Mayer et al. 2003 study). Our main hypothesis, following the interactivity principle, was that the introduction of learner control would benefit learning in terms of transfer. Second, we expected longer learning times in the learner control condition, which might account for better transfer scores.

To test our hypotheses we presented learners with an adapted version of the multimedia lesson on the formation of lightning that was used by Mayer and Chandler (2001), but extended the interactivity of the material by adding different learner control options. In our learner control version, participants could stop and replay the material, navigate freely between different segments of the lesson, and choose whether or not they wanted to listen to each segment of the corresponding narration.

Furthermore, to better understand what might explain the (lack of) effects of learner control, we followed Kennedy’s (2004) advice and measured a number of other variables. First, as the mental effort measure had failed to produce a significant difference in the Mayer and Chandler study (2001), we tried to capture a different aspect of the cognitive processes that would account for the interactivity principle. In line with Hasler et al. (2007), we hypothesized that participants in the learner control condition would be more cognitively involved, processing the learning materials more actively, elaborating and revising their mental models. Thus, we measured the participants’ cognitive involvement. Second, we checked the prior knowledge of the learners, to see how it related to the use of the learner control options. Third, as a measure of motivational disposition, we measured the participants’ interest in weather phenomena. Fourth, we kept track of the use of the learner control options to see if and how these were used, and to relate their use to the learning performance. In the final data analysis, we further refined our understanding of the effect of learner control by relating interest, prior knowledge and cognitive involvement to the interactive behavior to see whether any of these measures could explain some of the individual variability in the use of the control options.

Method

Participants

Fifty-two university students of communication participated in our study on a voluntary basis (17 men and 35 women; average age 22.5 years, SD = 2.1). They were randomly assigned to two experimental groups (with or without learner control), with 26 students in each group.

Materials

Multimedia instruction

Mayer and Chandler (2001), in their study on the effect of learner control, presented their participants with an animated multimedia instruction on the formation of lightning. This instruction was a 140-s animation that depicted the process of lightning formation, accompanied by an explanative narration. Based on the description of this animation on pages 391 and 392 of Mayer and Chandler’s article, we developed an adapted version of the instruction. We took the 16 frames depicted in the article and presented these static pictures as a slideshow. To capture the dynamics of the original animation, we added extra arrows to the scenes wherever they were necessary for understanding the direction of the movements described in the narration. One advantage of presenting the multimedia lesson as a slideshow was that we could easily add various control options in the learner control version of the material. Furthermore, we used exactly the same text for the narration, although translated into Dutch and spoken by a female voice (instead of a male voice). The instruction differed from Mayer and Chandler’s material in two other respects. First, a time bar was added that showed the passing of time for each slide. Second, a unique key word identifying each segment, picked from the accompanying narration text, was inserted into the picture on each slide. For example, Fig. 1 shows a slide with ‘luchtstromingen’ (=gusts of wind) as key word. Both the time bar and the key words were added to support navigation in the learner control version of the lesson.

Fig. 1
figure 1

Example of one of the slides in the multimedia instruction (based on Mayer and Chandler 2001) without learner control options

In the version without learner control (see Fig. 1 for a slide example), each of the 16 slides was shown for 13 s, after which the slideshow automatically continued with the next slide. Each slide was accompanied by the corresponding segment of the narration. The total duration of the slideshow was 210.6 s (a bit longer than Mayer and Chandler’s original animation, mainly because of language differences).

The learner control version (see Fig. 2 for a slide example) was designed to give the learner multiple control options over the slideshow. If none of these options were used, the slideshow automatically continued to the next slide after 13 s, as in the version without learner control. However, the learner could interrupt the slideshow in three different ways. First, the learner could stop the slideshow and restart it using a ‘stop’ and a ‘play’ button. Second, they could click on a button called ‘Wat gebeurt hier’ (=what happens here) to hear the corresponding segment of the narration, after which the slideshow continued with the next slide. Third, they could use the menu on the left side of the screen to navigate freely between the different slides. When they had visited a slide, the color of the corresponding item in the menu changed from blue to red. To finish the slideshow, the learner could either wait until the slideshow had come to the final ‘einde’ (=finish) slide or press the ‘einde’ (=finish) button in the menu; in both cases, they then had to confirm that they wanted to end the show.

Fig. 2
figure 2

Example of one of the slides in the multimedia instruction (based on Mayer and Chandler 2001) with learner control options

(Estimate of) prior knowledge

The prior knowledge test was identical to the retention test used by Mayer and Chandler (2001). It was a paper-and-pencil test that consisted of the following open question: “Please write down an explanation of how lightning works”. A maximum of 6 min was allotted to write down the answer. To relate prior knowledge to the use of the learner control options, we not only measured participants’ actual level of prior knowledge, but also their perceived level of prior knowledge, because that is what participants would be more likely to act upon during the multimedia instruction. So, participants had to estimate the correctness of their response to the prior knowledge test on a 10-point scale, ranging from 0% correct to 100% correct.

Retention

The retention test was identical to the prior knowledge test. The answers to both the retention and the prior knowledge test were scored for the presence of eight main steps in lightning formation, as identified by Mayer in his book on multimedia learning (Mayer 2001, p. 26). Two independent judges scored the answer sheets, with an inter-rater correlation of .84. The prior knowledge and retention scores were calculated by taking the average score of the two raters (ranging from 0 to 8).

Transfer

The transfer test questions were also largely based on the test used by Mayer and Chandler (2001). The paper-and-pencil test consisted of three open questions, such as “Suppose you see clouds in the sky, but no lightning. Why not? Give three possible arguments”. Participants got 2.5 min to answer each question. Two independent judges scored the answer sheets using the model answers described by Mayer (2001, pp. 29–30), with an inter-rater correlation of .86. The average score of the two raters was taken as the transfer score (ranging from 0 to 9).

Interest in weather

The interest test was a questionnaire consisting of seven statements about the learner’s attitude towards the weather, such as “I like to talk to other people about weather phenomena”. Participants scored each item on a 7-point Likert scale, ranging from totally disagree to totally agree. After item analysis, we dropped one item that correlated negatively with the other items, resulting in a final scale of six items with an acceptable Cronbach’s alpha of .76. The interest score was calculated by taking the average of the six remaining items, ranging from 1 (minimal interest) to 7 (maximal interest).
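The reliability statistic used for this scale can be computed directly from the item-score matrix. The sketch below is illustrative only (the function name and the data are ours, not the study's); it implements the standard Cronbach's alpha formula, α = k/(k−1) · (1 − Σ item variances / variance of the total score):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) matrix of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses of five participants to three 7-point Likert items.
scores = np.array([
    [6, 5, 6],
    [4, 4, 5],
    [2, 3, 2],
    [5, 5, 6],
    [3, 2, 3],
])
alpha = cronbach_alpha(scores)
```

Dropping an item that correlates negatively with the rest, as done above, raises this statistic, because such an item reduces the covariance part of the total-score variance.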

Cognitive involvement

The cognitive involvement scale was constructed based on similar scales described by Bruner, James, and Hensel (2001). It consisted of 15 semantic differentials, like (“When I was going through the lesson I was…”) concentrated–not concentrated, and interested–uninterested, that had to be scored on a 7-point scale. The internal consistency of the scale was high, with a Cronbach’s alpha of .88. The cognitive involvement score was computed by taking the average score, and ranged from 1 (minimal involvement) to 7 (maximal involvement).

Measures of interactive behavior

To gain insight into how the learner control options were used, the activities of each participant in the learner control condition were logged. This resulted in the following process measures of interactive behavior:

  • total time-on-task (from the start of the animated multimedia instruction until the learner confirmed that he or she wanted to end the instruction).

  • number of slides visited (counting each time a slide automatically appeared after the previous one, and each time a slide was requested by the learner through the navigation menu).

  • number of narration segments requested (counting each time a learner clicked on the ‘what happens here’ button).

  • number of menu clicks (counting each time a learner clicked on one of the options in the menu to navigate to a different slide).

  • number of stop clicks (counting each time the learner stopped the slideshow by clicking on the ‘stop’ button).
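Measures like these can be derived mechanically from an event log. The sketch below is purely illustrative: the log format and the action names are our own assumptions, since the actual logging software is not described here.

```python
# Each hypothetical log entry is (time_in_seconds, action, slide_number);
# the action names mirror the five process measures listed above.
def process_measures(log):
    start = log[0][0]
    end = next(t for t, action, _ in log if action == "confirm_end")
    count = lambda name: sum(1 for _, a, _ in log if a == name)
    return {
        # from instruction start until the learner confirmed ending it
        "time_on_task": end - start,
        # automatic advances plus slides requested through the menu
        "slides_visited": count("auto_advance") + count("menu_click"),
        "narration_segments_requested": count("narration_click"),
        "menu_clicks": count("menu_click"),
        "stop_clicks": count("stop_click"),
    }

# A short invented session: watch, request a narration, jump back, stop, end.
log = [
    (0, "start", 1),
    (13, "auto_advance", 2),
    (20, "narration_click", 2),
    (26, "menu_click", 1),
    (40, "stop_click", 1),
    (45, "auto_advance", 2),
    (60, "confirm_end", None),
]
measures = process_measures(log)
```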

Procedure

The experiment was carried out in individual sessions. Participants were seated behind a multimedia computer with a headphone attached to it. First, they filled in the weather interest questionnaire. Subsequently, they got 6 min to write down an answer to the prior knowledge test, after which they gave an estimate of the correctness of their answer. Then they were asked to put on the headphone and read the on-screen instructions, which told them to study the multimedia lesson and explained that they would be tested afterwards. In the learner-control condition, extra information was given on the use of the interactive features. After viewing the multimedia lesson, the cognitive involvement questionnaire was presented first, then the retention test, and finally the transfer test. When the students had finished the tests, they were thanked for their participation.

Results

All group differences were analyzed with t-tests, and Pearson correlations were calculated for linear relationships. For all statistical tests, a significance level of .05 was applied.

Table 1 shows the average scores of the two experimental groups on the different measures. In both groups, the reported interest in weather phenomena was quite average (overall: M = 4.0, SD = 1.1, on a scale from 1 to 7). Moreover, prior knowledge of the subject was rather low, with hardly any of the eight steps in the formation of lightning mentioned correctly in the prior knowledge test (overall: M = 0.4; SD = 0.5). Nevertheless, participants grossly overestimated their prior knowledge, predicting that their answers on the prior knowledge test would be about 40% correct (overall: M = 4.1; SD = 2.3). Although the students in the learner-control group seemed a bit more confident than those in the no-learner-control group, the two groups did not differ significantly on interest, prior knowledge, or estimated prior knowledge (all ps > .05).

Table 1 Group means on dependent measures (SD in brackets)

The retention scores show that participants had learned something from the multimedia lesson. The overall score went up from 0.4 on the prior knowledge test to a mean retention score of 4.3 (a significant increase of 3.9 points, t(51) = 24.81, p < .001). So after watching the multimedia lesson, participants could on average recall about half the key steps in the formation of lightning. The difference between the two experimental groups on retention was not significant (t(50) = 0.79, p = .44, Cohen’s effect size d = 0.22), nor when the prior knowledge score was subtracted from the retention score (t(50) = 0.55; p = .58, d = 0.15). On the transfer test, however, the difference between groups was significant in the expected direction (t(50) = 1.77, p = .04, one-sided, d = 0.50), with participants in the learner control group scoring higher than participants in the no control group (M = 2.7 and M = 2.1, respectively). Finally, the overall cognitive involvement score was quite high, 5.2 (SD = 0.7) on a scale from 1 to 7, but the two groups did not differ significantly (t(50) = 1.13, p = .26, d = 0.32). So in this experiment, learner control had a positive effect on transfer performance, but not on retention. Moreover, it did not lead to a significantly higher level of cognitive involvement with the learning materials.
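The group comparisons above pair an independent-samples t statistic with Cohen's d as effect size. As a minimal sketch of how the two statistics relate (with made-up scores, not the study's data):

```python
import numpy as np

# Made-up transfer scores for two equally sized groups (illustrative only).
no_control = np.array([1.0, 2.0, 3.0, 2.0, 2.5, 2.0])
learner_control = np.array([2.0, 3.0, 4.0, 3.0, 3.5, 3.0])

n = len(no_control)
diff = learner_control.mean() - no_control.mean()
pooled_var = (no_control.var(ddof=1) + learner_control.var(ddof=1)) / 2

# Independent-samples t statistic for equal group sizes.
t = diff / np.sqrt(pooled_var * 2 / n)

# Cohen's d: the mean difference in pooled-standard-deviation units.
d = diff / np.sqrt(pooled_var)
```

Note that for equal group sizes t = d·√(n/2), which is why a medium effect (d = 0.50) can sit right at the significance boundary with 26 participants per group.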

But how did participants in the learner control group use the learner control options? Table 2 shows the results on the measures of interactive behavior. Looking at time-on-task, all participants in the learner control condition spent more time on the multimedia lesson than the 210.6 s of the condition without learner control (t(25) = 6.22, p < .001). The variance was large, with some participants spending more than 10 min on the task, about three times as long as the fastest participants. The number of slides visited varied just as much, from 16 to 48. Closer inspection of the log files revealed that all participants had watched each slide at least once, so no one had missed a slide. Only 3 participants had missed some of the narration segments and thus had not heard the entire narration. The other 23 participants had listened to each of the 16 segments at least once (with a maximum of 67 segments). On average, participants spent 22.5 s on each slide (instead of the 13 s in the condition without control) and listened to the accompanying narration segment 1.8 times.

Table 2 Mean, standard deviation, minimum and maximum scores on measures of interactive behavior in the learner control group (n = 26)

The navigation menu was used an average of 16.3 times (menu clicks). Three of the participants had not used the navigation menu at all and had simply watched the 16 slides in their original order. The other 23 participants had used the navigation menu, but mainly to go backward in the slideshow and re-inspect a previous slide. When navigating forward, no one ever skipped a slide that had not yet been watched; everyone stuck to the original order of the slideshow. Finally, the stop button was hardly used: only 8 participants used it at least once, with a maximum of 4 times.

The final question was how the measures of interactive behavior related to the other measures (see Table 3 for correlations). Of the variables measured before the multimedia instruction, only the prior knowledge score had a significant positive correlation with time-on-task (r = .40, p = .04), the number of slides (r = .39, p = .04), and the number of narration segments requested (r = .42, p = .03). This indicates that students who had more prior knowledge also took more time to study the multimedia lesson. Neither the amount of interest in weather phenomena nor the estimated amount of prior knowledge was significantly related to any of the interactive behavior measures. Of the variables measured after the animation, only the retention score was significantly related to the interactive behavior measures. Because these correlations might partly be explained by the significant correlations between prior knowledge and interactive behavior, we controlled for the effect of prior knowledge by subtracting the prior knowledge score from the retention score. This way, we looked only at the actual knowledge increase, which still correlated significantly with the number of slides viewed (r = .47, p = .02) and the number of menu clicks (r = .46, p = .02), and non-significantly with total time-on-task (r = .27) and the number of narration segments (r = .29). So, in sum, participants in the learner control condition who visited relatively more slides and used the navigation menu more often also learned more from the multimedia lesson in terms of retention of the main concepts. No significant correlations, however, were found between the interactive behavior measures and transfer performance, or between interactive behavior and cognitive involvement.

Table 3 Correlations between dependent measures and measures of interactive behavior in the learner control group (n = 26)
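The control for prior knowledge described above amounts to correlating a gain score (retention minus prior knowledge) with the behavior measures. A small sketch with invented numbers (not the study's data):

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation between two equal-length sequences."""
    return float(np.corrcoef(np.asarray(x, float), np.asarray(y, float))[0, 1])

# Invented scores for five learners (illustrative only).
prior_knowledge = np.array([0.0, 1.0, 0.0, 0.5, 1.0])
retention = np.array([4.0, 5.5, 3.0, 4.5, 6.0])
menu_clicks = np.array([10, 22, 5, 14, 30])

# The gain score removes the head start of learners with prior knowledge,
# so its correlation with behavior reflects actual knowledge increase.
gain = retention - prior_knowledge
r_gain = pearson_r(gain, menu_clicks)
```

Subtracting a pretest from a posttest is the simplest way to adjust for baseline differences; a regression-based adjustment (partial correlation) is a common alternative when the pretest-posttest relationship is not one-to-one.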

Discussion

The first aim of our experiment was to replicate the interactivity principle, and this replication succeeded. Although our multimedia instruction was less dynamic than the original instruction that Mayer and Chandler (2001) used in their study, the general effect of learner control was the same: higher scores on transfer but not on retention. Moreover, we have shown that the improvement in learning generalizes to a situation in which learners have multiple interactive features at their disposal.

But how can the superiority of the learner control version best be explained? Because we logged the interactive behavior of the participants, we know that the introduction of learner control led to a significant increase in time-on-task. On average, participants in the learner control condition took about 6 min to study the instruction, whereas their counterparts without learner control could study it for only 3.5 min. Most participants spent this 60% of extra time reinspecting slides they had already visited via the navigation menu and listening to narration segments more than once. Within the learner control group, a positive relationship existed between the amount of interactive behavior and the number of steps in the formation of lightning recalled at the retention test, but the group as a whole did not perform any better on retention than the group without learner control. The reverse was true for the transfer results. Within the learner control group, the extra time spent exploring the instruction had no linear relationship with transfer performance, whereas the group as a whole showed a better understanding of the formation of lightning than the group without learner control. So it seems that the availability of the learner control options induced learners to try to understand the multimedia lesson better, but that the actual level of use of the learner control options was related only to retention of the main concepts.

The second aim of this study was to find out how participants' actual interactive behavior related to the variables hypothesized to have their own influence on the interactive process. First, within the learner control group, learners with higher prior knowledge generally spent more time watching the slides and listening to the narration segments. Nevertheless, this does not seem to reflect a very intentional strategy, as the subjective estimates of prior knowledge were unrelated to the amount of interactive behavior displayed. Nor were participants with a higher interest in the subject more inclined to use the learner control options or to spend more time. Finally, those within the learner control group who used the control options more intensively did not report higher cognitive involvement. So in this study, the only variable that explained part of the individual variance in the use of learner control options was prior knowledge. The relationship seems a bit counterintuitive, however, as learners with higher levels of prior knowledge used the learner control options to spend more time, instead of less, on the instructions.

So, in sum, it seems that adding learner control to an animated instruction can indeed increase understanding, but that a price has to be paid in terms of time-on-task. One might argue that this is too high a price to pay for just a small performance gain. Moreover, large individual differences exist in how the learner control options are used, and these differences are unrelated to the level of understanding. Also, the availability of learner control options does not seem to have increased cognitive involvement with the materials. Thus, it remains unclear in what way the extra time pays off in terms of better understanding. To find out whether it is just a matter of ‘more time to think it over’, an alternative research strategy would be to give learners more time-on-task without giving them control over the pace and order of information, for example by generally slowing down the presentation rate or including longer pauses between segments. Equal improvement in transfer performance might be obtained without the extra cognitive costs of having to decide what to study, in what order, and for how long.

On the other hand, some caution must be applied when trying to generalize our findings. First of all, the multimedia instruction used in our study (based on Mayer and Chandler 2001) was a short procedural description of a linear process. One might argue that learner control will not be very helpful in such a situation, especially because not much might be gained from studying the materials in a non-linear way. In fact, Moreno and Valdez (2005) showed that presenting the slides from the lightning instruction in a non-linear order hindered rather than helped learning. In contrast, learners who study animations about a process without a very stringent linear order in its causal mechanisms, like the workings of a machine, might benefit considerably from navigational tools that enable them to explore the content freely. A second caveat is that the navigational tools in the learner control version of our study might not have been very helpful for learners in figuring out where to go. The menu labels only contained information on the relative position of a slide (‘slide 5’), but nothing about its contents. We left content information out of the navigation menu because it might otherwise have served as a kind of advance organizer of the content of the instructions, in which case the learner control effect would have been confounded. But in practice, one can easily imagine that navigating would have been much easier and more efficient if learners could have read what information to expect on each slide. A final point is that our study made only a rather coarse comparison between animated instructions without learner control options and with multiple learner control options; a much more fine-grained study is needed to find out which mix of learner control options really supports learning with our specific materials.

Nevertheless, when designing an instructive animation, the trade-off between giving learners control over a multimedia instruction and the subsequent increase in time-on-task needs to be taken very seriously. A closer look at the way learners use interactive features reveals large individual differences, but it remains unclear where these differences originate. Possible variables to take into account in future research are the metacognitive strategies learners apply when navigating through instructional material. More research is also needed to further clarify the relationship between learner characteristics and the use of learner control. Only then can better insight be gained into the benefits and pitfalls of introducing learner control in animated multimedia instructions.