Official implementation of the paper "MSAF: Multimodal Split Attention Fusion".
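For orientation, a split-attention-style fusion block sums features from each modality, passes the sum through a shared bottleneck, and then gives each modality its own softmax attention weights over channels. The PyTorch sketch below illustrates that general idea only; the two-modality setup, layer sizes, and all names are assumptions for illustration, not the repo's actual MSAF code.

```python
import torch
import torch.nn as nn


class SplitAttentionFusion(nn.Module):
    """Toy split-attention fusion over two modality feature vectors."""

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        # Shared bottleneck applied to the element-wise sum of modality features.
        self.bottleneck = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
        )
        # One attention head per modality, producing per-channel logits.
        self.attn_a = nn.Linear(channels // reduction, channels)
        self.attn_b = nn.Linear(channels // reduction, channels)

    def forward(self, feat_a: torch.Tensor, feat_b: torch.Tensor) -> torch.Tensor:
        # feat_a, feat_b: (batch, channels) pooled features from two modalities.
        joint = self.bottleneck(feat_a + feat_b)           # (batch, channels // r)
        logits = torch.stack([self.attn_a(joint),
                              self.attn_b(joint)], dim=1)  # (batch, 2, channels)
        weights = logits.softmax(dim=1)                    # modalities compete per channel
        stacked = torch.stack([feat_a, feat_b], dim=1)     # (batch, 2, channels)
        return (weights * stacked).sum(dim=1)              # fused: (batch, channels)


fusion = SplitAttentionFusion(channels=128)
fused = fusion(torch.randn(8, 128), torch.randn(8, 128))
print(fused.shape)  # torch.Size([8, 128])
```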
Modality-Transferable-MER, a multimodal emotion recognition model with zero-shot and few-shot capabilities.
This repo contains the source code for the MultiModal Masking (M^3) paper from Interspeech 2021.
Emotiwave is a research project investigating how well AI systems can recognise human emotions from video when one or more sensors fail. The core question: if the audio, the camera, or the transcript drops out, does the system fall apart, or does it adapt?
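One common way to probe that question during training is modality dropout: randomly zeroing out whole input streams so the model learns to cope with missing sensors. The sketch below shows the general technique; the dictionary keys, feature dimensions, and `p_drop` value are illustrative assumptions, not Emotiwave's actual API.

```python
import random
import torch


def drop_modalities(batch: dict, p_drop: float = 0.3) -> dict:
    """Zero out whole modality streams at random, always keeping at least one."""
    keep = random.choice(list(batch))  # guarantee one surviving modality
    out = {}
    for name, features in batch.items():
        if name != keep and random.random() < p_drop:
            out[name] = torch.zeros_like(features)  # simulate a failed sensor
        else:
            out[name] = features
    return out


# Hypothetical CMU-MOSEI-like batch: 74-d audio, 35-d visual, 300-d text features.
batch = {
    "audio": torch.randn(4, 74),
    "video": torch.randn(4, 35),
    "text": torch.randn(4, 300),
}
masked = drop_modalities(batch)
```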
Multimodal emotion recognition framework for Human–Robot Interaction, featuring adaptive cross-modal fusion and emotion-level Transformer decoding with built-in interpretability.
A comprehensive implementation of multimodal emotion recognition using the Cross-Modal Adaptive Representation with Attention Transformer (CARAT) architecture, featuring variable-length temporal processing and transfer learning for continuous emotion modeling.
Mock’n-Hire supports hiring end-to-end: it ranks resumes by semantic similarity and delivers real-time, emotion-aware mock-interview feedback, giving recruiters bias-resistant insights and candidates targeted, actionable practice.