In this empirical contribution, a follow-up study of previous research [1], we focus on the issues of stability and sensitivity of Learning Analytics-based prediction models. Do prediction models stay intact when the instructional context is repeated in a new cohort of students, and do prediction models indeed change when relevant aspects of the instructional context are adapted? Applying Buckingham Shum and Deakin Crick’s theoretical framework of dispositional learning analytics, combined with formative assessments and learning management systems, we compare two cohorts of a large module introducing mathematics and statistics. Both module editions were based on principles of blended learning, combining face-to-face Problem-Based Learning sessions with e-tutorials, and shared the same instructional design, except for an intervention in the design of the quizzes administered in the module. We analyse bivariate and multivariate relationships between module performance and track and disposition data to provide evidence of both the stability and the sensitivity of prediction models.
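To make the cross-cohort comparison concrete, the following is a minimal sketch, not taken from the paper: it assumes hypothetical variables (`quiz`, `disposition`, `exam`) and simulated cohorts, and uses scikit-learn to fit the same multivariate prediction model in two cohorts so that coefficient stability and sensitivity can be inspected.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

# Hypothetical data: per-student quiz (track) and disposition scores plus exam performance.
rng = np.random.default_rng(0)

def make_cohort(n, quiz_weight):
    """Simulate one cohort; quiz_weight mimics an intervention in the quiz design."""
    quiz = rng.normal(0, 1, n)            # formative quiz scores (track data)
    disposition = rng.normal(0, 1, n)     # learning disposition scale
    exam = quiz_weight * quiz + 0.4 * disposition + rng.normal(0, 0.5, n)
    return pd.DataFrame({"quiz": quiz, "disposition": disposition, "exam": exam})

cohort_a = make_cohort(1000, quiz_weight=0.6)  # first cohort
cohort_b = make_cohort(1000, quiz_weight=0.8)  # second cohort, adapted quiz design

# Fit the same multivariate prediction model in each cohort separately.
for name, cohort in [("cohort A", cohort_a), ("cohort B", cohort_b)]:
    X, y = cohort[["quiz", "disposition"]], cohort["exam"]
    model = LinearRegression().fit(X, y)
    print(name, "coefficients:", model.coef_.round(2),
          "R^2:", round(model.score(X, y), 2))

# Stability would show as similar disposition coefficients across cohorts;
# sensitivity would show as a shifted quiz coefficient after the intervention.
```

In this toy setup the disposition coefficient stays roughly constant across cohorts while the quiz coefficient shifts, which is one simple way to operationalise the stability/sensitivity distinction the study investigates.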

doi.org/10.1007/978-3-319-29585-5_15, hdl.handle.net/1765/83421

Tempelaar, D. T., Rienties, B., & Giesbers, B. (2016). Verifying the stability and sensitivity of learning analytics based prediction models: An extended case study. doi:10.1007/978-3-319-29585-5_15