Decoding dynamic affective responses to naturalistic videos with shared neural patterns
This study explored the feasibility of using shared neural patterns from brief affective episodes (viewing affective pictures) to decode extended, dynamic affective sequences during a naturalistic experience (watching movie trailers). Twenty-eight participants viewed pictures from the International Affective Picture System (IAPS) and, in a separate session, watched a set of movie trailers. We first used a general linear model (GLM) analysis to locate voxels in the lateral occipital cortex (LOC) that responded to the affective picture categories, and then performed between-subject hyperalignment on these LOC voxels based on their responses during movie-trailer watching. After hyperalignment, we trained between-subject machine learning classifiers on the neural responses to the affective pictures, and used these classifiers to decode the affective states of an out-of-sample participant both during picture viewing and during movie-trailer watching. Within participants, the neural classifiers identified the valence and arousal categories of the pictures, and tracked self-reported valence and arousal during video watching. In aggregate, the classifiers produced valence and arousal time series that tracked the dynamic ratings of the movie trailers obtained from a separate sample. Our findings provide further support for the possibility of using pre-trained neural representations to decode dynamic affective responses during a naturalistic experience.
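As a rough illustration of the decoding pipeline described above (align subjects on shared movie data, train classifiers on picture responses, test on a held-out subject), the sketch below substitutes a one-pass orthogonal-Procrustes alignment for the iterative hyperalignment used in the paper. All data are simulated, and every name, shape, and parameter is an illustrative assumption, not the authors' implementation.

```python
# A minimal sketch of a hyperalignment-based between-subject decoding
# pipeline, NOT the authors' code. Data, shapes, and the one-pass
# Procrustes alignment are simplifying assumptions for illustration.
import numpy as np
from scipy.linalg import orthogonal_procrustes
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_subjects, n_timepoints, n_voxels = 4, 300, 50

# Simulated LOC time series recorded while each subject watches the same
# movie trailers (timepoints x voxels, one matrix per subject).
movie_data = [rng.standard_normal((n_timepoints, n_voxels))
              for _ in range(n_subjects)]

# Alignment step: rotate each subject's voxel space into a common template
# via orthogonal Procrustes on the shared movie-watching data.
template = movie_data[0]
rotations = [orthogonal_procrustes(x, template)[0] for x in movie_data]

# Simulated picture-viewing patterns (trials x voxels) with binary
# valence labels shared across subjects (same picture set for everyone).
n_trials = 80
picture_data = [rng.standard_normal((n_trials, n_voxels))
                for _ in range(n_subjects)]
labels = rng.integers(0, 2, n_trials)

# Leave-one-subject-out decoding: train on the aligned picture responses
# of all other subjects, then test on the held-out subject.
test = n_subjects - 1
train_X = np.vstack([picture_data[s] @ rotations[s]
                     for s in range(n_subjects) if s != test])
train_y = np.tile(labels, n_subjects - 1)
clf = LogisticRegression(max_iter=1000).fit(train_X, train_y)
print("held-out picture accuracy:",
      clf.score(picture_data[test] @ rotations[test], labels))

# Applying the fitted classifier timepoint-by-timepoint to the held-out
# subject's aligned movie data yields a decoded valence time series, which
# could then be compared against dynamic ratings from a separate sample.
decoded = clf.predict_proba(movie_data[test] @ rotations[test])[:, 1]
```

Full hyperalignment iteratively refines the common template across subjects; the single Procrustes pass here is only meant to capture the core idea of rotating each subject's voxel space into a shared space before training cross-subject classifiers.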
| Persistent URL | dx.doi.org/10.1016/j.neuroimage.2020.116618, hdl.handle.net/1765/124865 |
| Series | VSNU Open Access deal |
Chan, H. Y., Smidts, A., Schoots, V. C., Sanfey, A. G., & Boksem, M. A. S. (2020). Decoding dynamic affective responses to naturalistic videos with shared neural patterns. NeuroImage. doi:10.1016/j.neuroimage.2020.116618