Interview data is inherently multimodal: it comprises speech, facial expressions, and gestures, captured in a particular situation and carrying both textual information and emotion. This workshop shows how a multidisciplinary approach can exploit the full potential of interview data. The workshop first gives a systematic overview of the research fields working with interview data. It then presents the speech technology currently available to support transcribing and annotating interview data, such as automatic speech recognition, speaker diarization, and emotion detection. Finally, scholars who work with interview data and tools may present their work and discover how to make use of existing technology.

annotation, emotion detection, interview data, NLP, speech processing, transcription
dx.doi.org/10.1145/3382507.3420054, hdl.handle.net/1765/131902
22nd ACM International Conference on Multimodal Interaction, ICMI 2020
Erasmus University Rotterdam

Hessen, A.V. (Arjan Van), Calamai, S. (Silvia), Heuvel, H.V.D. (Henk Van Den), Scagliola, S.I., Karrouche, N., Beeken, J. (Jeannine), … Draxler, C. (Christoph). (2020). Speech, Voice, Text, and Meaning: A Multidisciplinary Approach to Interview Data through the use of digital tools. In ICMI 2020 - Proceedings of the 2020 International Conference on Multimodal Interaction (pp. 886–887). doi:10.1145/3382507.3420054