Learning analytics seeks to enhance learning processes through systematic measurement of learning-related data and to provide informative feedback to learners and educators. In this follow-up to previous research (Tempelaar, Rienties, & Giesbers, 2015), we focus on the stability and sensitivity of Learning Analytics (LA) based prediction models. Do prediction models stay intact when the instructional context is repeated in a new cohort of students, and do prediction models indeed change when relevant aspects of the instructional context are adapted? This empirical contribution applies Buckingham Shum and Deakin Crick's theoretical framework of dispositional learning analytics: an infrastructure that combines learning dispositions data with data extracted from computer-assisted, formative assessments and learning management systems (LMSs). We compare two cohorts of a large introductory quantitative methods module, with 1005 students in the '13/'14 cohort and 1006 students in the '14/'15 cohort. Both modules were based on principles of blended learning, combining face-to-face Problem-Based Learning sessions with e-tutorials, and share a similar instructional design, except for an intervention in the design of the quizzes administered in the module. Focusing on predictive power, we provide evidence of both stability and sensitivity of regression-type prediction models.


Tempelaar, D. T., Rienties, B., & Giesbers, B. (2015). Stability and sensitivity of learning analytics based prediction models. Retrieved from http://hdl.handle.net/1765/86057