Example code for some of my PhD work on recognizing dimensional emotions, including a little demo.
These are mostly unimodal emotion recognition models. For multimodal emotion recognition with different modality fusion strategies, see ACL2018-MultimodalMultitaskSentimentAnalysis.
Disfluency and Non-verbal Vocalisation (DIS-NV) features for emotion recognition in spoken dialogue (a short illustrative sketch follows the citation below):
@inproceedings{moore2014word,
  title={Word-Level Emotion Recognition Using High-Level Features},
  author={Moore, Johanna and Tian, Leimin and Lai, Catherine},
  booktitle={Computational Linguistics and Intelligent Text Processing},
  pages={17--31},
  year={2014},
  publisher={Springer}
}
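To give a concrete feel for how DIS-NV-style word-level features might feed a dimensional emotion model, here is a minimal, hypothetical sketch of an LSTM regressor in Keras. The feature set, sequence length, network size, and toy data below are illustrative assumptions for this README, not the configuration used in the papers or in the actual demo code.

```python
# Minimal sketch (NOT the repository's demo): an LSTM regressor over
# word-level DIS-NV-style feature sequences, predicting two continuous
# dimensional emotion targets (e.g. arousal and valence).
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense, Masking

SEQ_LEN = 25     # assumed number of words per context window (illustrative)
N_FEATURES = 5   # e.g. filled pause, filler, stutter, laughter, breath (assumed)

model = Sequential([
    Masking(mask_value=0.0, input_shape=(SEQ_LEN, N_FEATURES)),  # ignore zero-padded words
    LSTM(32),                                                    # encode the word sequence
    Dense(2, activation="linear"),                               # two dimensional-emotion outputs
])
model.compile(optimizer="adam", loss="mse")

# Toy data standing in for word-level feature sequences and emotion labels.
X = np.random.rand(100, SEQ_LEN, N_FEATURES)
y = np.random.uniform(-1.0, 1.0, size=(100, 2))
model.fit(X, y, epochs=2, batch_size=16, verbose=0)
```

Mean-squared-error regression over continuous arousal/valence targets is one common setup for dimensional emotion recognition; the models in this repository may differ in features, architecture, and training details.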
Differences between spontaneous and acted dialogue:
@inproceedings{tian2015emotion,
  title={Emotion recognition in spontaneous and acted dialogues},
  author={Tian, Leimin and Moore, Johanna D and Lai, Catherine},
  booktitle={Proceedings of the 2015 International Conference on Affective Computing and Intelligent Interaction},
  pages={698--704},
  year={2015},
  publisher={IEEE}
}
Application to predicting movie-induced emotions:
@inproceedings{tian2017recognizing,
  title={Recognizing induced emotions of movie audiences: Are induced and perceived emotions the same?},
  author={Tian, Leimin and Muszynski, Michal and Lai, Catherine and Moore, Johanna D and Kostoulas, Theodoros and Lombardo, Patrizia and Pun, Thierry and Chanel, Guillaume},
  booktitle={Proceedings of the 7th International Conference on Affective Computing and Intelligent Interaction},
  pages={28--35},
  year={2017},
  publisher={IEEE},
  address={San Antonio, Texas, USA}
}