Impact of temporal window on the performance of collaboration prediction models

Published

March 15, 2023

This talk presented a research paper at the 14th International Conference on Learning Analytics & Knowledge, held at Arizona State University, Arizona. The paper reported results from a study investigating how different temporal window sizes for feature aggregation affect the performance of collaboration prediction models in authentic classroom settings.

You can read more about the presented work here: Paper.

Presentation slides: Slides

Abstract

Multimodal Learning Analytics (MMLA) has been applied to collaborative learning, often to estimate collaboration quality with the use of multimodal data, which often have uneven time scales. The difference in time scales is usually handled by dividing and aggregating data using a fixed-size time window. So far, the current MMLA research lacks a systematic exploration of whether and how much window size affects the generalizability of collaboration quality estimation models. In this paper, we investigate the impact of different window sizes (e.g., 30 seconds, 60s, 90s, 120s, 180s, 240s) on the generalizability of classification models for collaboration quality and its underlying dimensions (e.g., argumentation). Our results from an MMLA study involving the use of audio and log data showed that a 60 seconds window size enabled the development of more generalizable models for collaboration quality (AUC 61%) and argumentation (AUC 64%). In contrast, for modeling dimensions focusing on coordination, interpersonal relationship, and joint information processing, a window size of 180 seconds led to better performance in terms of across-context generalizability (on average from 56% AUC to 63% AUC). These findings have implications for the eventual application of MMLA in authentic practice.
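The fixed-size time windowing described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline: the function name and the choice of the mean as the aggregate are assumptions for demonstration.

```python
from collections import defaultdict


def aggregate_windows(samples, window_size):
    """Aggregate timestamped feature values into fixed-size windows.

    samples: list of (timestamp_in_seconds, value) pairs, e.g. speech
             activity scores or log-event counts (illustrative).
    window_size: window length in seconds (e.g., 30, 60, 90, 120, 180, 240).

    Returns a dict mapping window index -> mean value in that window.
    (The mean is one common aggregate; others, such as sum or std,
    could be used instead.)
    """
    buckets = defaultdict(list)
    for t, v in samples:
        # Integer division assigns each sample to its window.
        buckets[int(t // window_size)].append(v)
    return {w: sum(vals) / len(vals) for w, vals in sorted(buckets.items())}


# Hypothetical audio-feature stream: (seconds, value) pairs.
speech = [(5, 0.2), (45, 0.8), (70, 0.5)]
print(aggregate_windows(speech, 60))   # two 60 s windows
print(aggregate_windows(speech, 120))  # one 120 s window
```

Varying `window_size` over the values studied in the paper changes how the same raw stream is summarized, which is precisely the design choice whose effect on model generalizability the study quantifies.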
