Code for UniMF: A Unified Multimodal Framework for Multimodal Sentiment Analysis in Missing Modalities and Unaligned Multimodal Sequences (Accepted by IEEE Transactions on Multimedia).
git clone https://github.com/gw-zhong/UniMF.git
- CMU-MOSI & CMU-MOSEI (GloVe) [aligned & unaligned] (the original download links are no longer available)
- MELD (Sentiment/Emotion) [aligned] (GloVe only)
- UR-FUNNY (V1 & V2) [aligned] (GloVe only)
Alternatively, you can download these datasets from:
- BaiduYun Disk (code: zpqk)
For convenience, we also provide the fine-tuned BERT pre-trained model that we used:
- BaiduYun Disk (code: e7mw)
First, install the required packages for your virtual environment:
pip install -r requirements.txt
Then, create (empty) folders for data, results, and pre-trained models:
cd UniMF
mkdir data results pre_trained_models
and put the downloaded data in 'data/'.
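After downloading, the repository layout should look roughly like the sketch below. The exact file names inside data/ and pre_trained_models/ depend on the archives you downloaded, and placing the fine-tuned BERT model in pre_trained_models/ is our assumption based on the folder name; this is only indicative:
UniMF/
├── data/                  <- downloaded dataset files go here
├── pre_trained_models/    <- fine-tuned BERT model (only needed for BERT mode)
├── results/               <- experiment outputs
├── scripts/
└── main.py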
To make it easier to run the code, we have provided scripts for each dataset:
- input_modalities: The input modalities, which can be any of LAV, LA, LV, AV, L, A, or V.
- experiment_id: The ID of the experiment, which can be set to an arbitrary integer.
- number_of_trials: The number of trials for hyperparameter optimization.
- subdataset_name: Only used for MELD; set to meld_senti or meld_emo for MELD (Sentiment) or MELD (Emotion), respectively.
Note: If you want to run in BERT mode, add --use_bert and change the dataset name to mosi-bert or mosei-bert.
bash scripts/mosi.sh [input_modalities] [experiment_id] [number_of_trials]
bash scripts/mosei.sh [input_modalities] [experiment_id] [number_of_trials]
bash scripts/meld.sh [input_modalities] [experiment_id] [subdataset_name] [number_of_trials]
bash scripts/urfunny.sh [input_modalities] [experiment_id] [number_of_trials]
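For example, to run CMU-MOSI with all three modalities and MELD (Emotion) with the language and audio modalities (the experiment ID 1 and the 10 trials below are arbitrary placeholder values):
bash scripts/mosi.sh LAV 1 10
bash scripts/meld.sh LA 1 meld_emo 10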
Or, you can run the code directly:
python main.py --[FLAGS]
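For reference, a direct invocation might look like the sketch below. Apart from --use_bert, the flag names (--dataset, --modalities) are only assumptions for illustration; check the argument parser in main.py for the actual option names and defaults:
# flag names other than --use_bert are hypothetical; see main.py for the real ones
python main.py --dataset mosi-bert --modalities LAV --use_bert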
Please cite our paper if you find it useful for your research:
@article{huan2023unimf,
title={UniMF: A Unified Multimodal Framework for Multimodal Sentiment Analysis in Missing Modalities and Unaligned Multimodal Sequences},
author={Huan, Ruohong and Zhong, Guowei and Chen, Peng and Liang, Ronghua},
journal={IEEE Transactions on Multimedia},
year={2023},
publisher={IEEE}
}
If you have any questions, feel free to contact me at [email protected].