
Multimodal-Action-Recognition

In our work, "EPAM-Net: An Efficient Pose-driven Attention-guided Multimodal Network for Video Action Recognition," we address the limitations of existing multimodal human action recognition approaches, which are either computationally expensive, limiting their applicability in real-time scenarios, or fail to exploit the spatial-temporal information of multiple data modalities.

We present an efficient multimodal architecture (EPAM-Net) with a spatial-temporal attention block that uses skeleton features to help the visual network stream focus on key frames and their salient spatial regions. The proposed architecture achieves competitive performance with state-of-the-art methods on the NTU RGB-D 60 and 120 datasets, with a 6.2-9.9x reduction in FLOPs and a 9-9.6x reduction in the number of network parameters.
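To illustrate the idea of pose-driven attention guiding a visual stream, the following is a minimal PyTorch sketch, not the actual EPAM-Net implementation: pose-stream features are reduced to a spatial map (which regions matter in each frame) and a temporal map (which frames matter), and these re-weight the RGB feature volume. All module names, shapes, and the residual formulation are assumptions made for illustration.

```python
import torch
import torch.nn as nn


class PoseGuidedSTAttention(nn.Module):
    """Illustrative pose-driven spatial-temporal attention block (not the official code).

    Pose-stream features produce a spatial attention map and a temporal attention
    map that re-weight the RGB feature volume, so the visual stream attends to
    salient regions in key frames.
    """

    def __init__(self, pose_channels: int):
        super().__init__()
        # Spatial attention: 1x1x1 conv over pose features -> per-location weight
        self.spatial = nn.Sequential(
            nn.Conv3d(pose_channels, 1, kernel_size=1),
            nn.Sigmoid(),
        )
        # Temporal attention: pool out space, then score each frame
        self.temporal = nn.Sequential(
            nn.AdaptiveAvgPool3d((None, 1, 1)),
            nn.Conv3d(pose_channels, 1, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, rgb_feat: torch.Tensor, pose_feat: torch.Tensor) -> torch.Tensor:
        # rgb_feat:  (N, C_rgb,  T, H, W) features from the visual stream
        # pose_feat: (N, C_pose, T, H, W) features from the skeleton/pose stream
        spatial_map = self.spatial(pose_feat)    # (N, 1, T, H, W)
        temporal_map = self.temporal(pose_feat)  # (N, 1, T, 1, 1)
        # Emphasize salient regions in key frames; residual path keeps the original signal
        return rgb_feat * spatial_map * temporal_map + rgb_feat


if __name__ == "__main__":
    block = PoseGuidedSTAttention(pose_channels=64)
    rgb = torch.randn(2, 256, 8, 14, 14)
    pose = torch.randn(2, 64, 8, 14, 14)
    print(block(rgb, pose).shape)  # torch.Size([2, 256, 8, 14, 14])
```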

Paper: https://lnkd.in/ec-hWUzq
