Research suggests that learners spend a substantial amount of time looking at the model's face when it is visible in a video-based modeling example.
Consequently, in this study we hypothesized that learners might not attend in a timely manner to the task areas the model refers to, unless their attention is guided to those areas by the model's gaze or gestures.
Results showed that students in all conditions looked more at the female model than at the task area she referred to. However, the difference between attention to the model and attention to the task gradually declined as a function of cueing: students who observed the model gazing and gesturing at the task looked the least at the model and the most at the task area she referred to, whereas those who observed the model looking straight into the camera looked the most at the model and the least at the task area she referred to. Students who observed the model only gazing at the task fell in between.
In conclusion, gesture cues combined with gaze cues effectively helped distribute attention between the model and the task display in our video-based modeling example.

Additional Metadata
Keywords Gestures, Video-based human modeling, Eye tracking, Split attention, Cognitive load
Persistent URL hdl.handle.net/1765/110632
Journal Educational Technology & Society (online)
Citation
Ouwehand, K.H.R., van Gog, T.A.J.M., & Paas, G.W.C. (2015). Designing effective video-based modeling examples using gaze and gesture cues. Educational Technology & Society (online), 18, 78–88. Retrieved from http://hdl.handle.net/1765/110632