Emotion Recognition Using Convolutional Neural Networks

Description

This project aims to implement emotion recognition using convolutional neural networks (CNNs), inspired by the architectures and methodologies described in "Facial Emotion Recognition: State of the Art Performance on FER2013" and "Facial Expression Recognition in the Wild via Deep Attentive Networks". The project utilizes TensorFlow and Keras for model development, with a focus on achieving high accuracy in detecting human emotions from facial images.

Features

  • Utilizes a custom-built CNN architecture for emotion recognition.
  • Employs Haar cascades for effective face detection in preprocessing.
  • Implements data augmentation to enhance model robustness.
  • Splits data into training and validation sets for effective model evaluation.
  • Can be extended to real-time emotion recognition with minimal modifications.

Model Architecture

The model architecture is based on the principles outlined in the referenced papers, adapted for emotion recognition. Key features include:

  • Multiple convolutional layers for feature extraction.
  • MaxPooling layers for spatial data reduction.
  • Dropout layers to mitigate overfitting.
  • Dense layers for classification, with a softmax activation function in the output layer to handle multiple emotion classes.
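A minimal Keras sketch of such an architecture is shown below. The layer counts, filter sizes, and dropout rates are illustrative assumptions rather than the project's exact configuration:

```python
from tensorflow import keras
from tensorflow.keras import layers

NUM_CLASSES = 7  # FER-2013 emotion labels


def build_model(input_shape=(48, 48, 1), num_classes=NUM_CLASSES):
    """A compact CNN: stacked conv blocks with max-pooling and dropout,
    followed by dense layers and a softmax output."""
    model = keras.Sequential([
        keras.Input(shape=input_shape),
        # Convolutional layers for feature extraction.
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),          # spatial data reduction
        layers.Dropout(0.25),           # mitigate overfitting
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Dropout(0.25),
        # Dense classification head with softmax over the emotion classes.
        layers.Flatten(),
        layers.Dense(256, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```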

Training Dataset

The dataset used for training is FER-2013. Each image is labelled with one of seven emotion categories: 0=Angry, 1=Disgust, 2=Fear, 3=Happy, 4=Sad, 5=Surprise, 6=Neutral. Key features of the dataset:

  • 28,709 examples in the training set and 3,589 in the test set
  • Faces are already centered and occupy roughly the same amount of space in each image
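The common CSV release of FER-2013 stores each 48x48 grayscale image as a space-separated pixel string. A loading sketch, assuming the standard `emotion`/`pixels`/`Usage` columns (`load_fer2013` is a hypothetical helper, not part of this repository):

```python
import numpy as np
import pandas as pd


def load_fer2013(csv_path):
    """Parse the FER-2013 CSV into (train, test) image/label arrays.

    Each row stores a 48x48 grayscale image as a space-separated
    pixel string; pixels are scaled to [0, 1] with a channel axis added.
    """
    df = pd.read_csv(csv_path)
    images = np.stack([
        np.array(row.split(), dtype="float32").reshape(48, 48)
        for row in df["pixels"]
    ])[..., None] / 255.0
    labels = df["emotion"].to_numpy()
    is_train = (df["Usage"] == "Training").to_numpy()
    return ((images[is_train], labels[is_train]),
            (images[~is_train], labels[~is_train]))
```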

Installation

To set up the project, follow these steps:

  1. Clone the repository
    git clone https://github.com/jdl00/emotion_recognition

  2. Install the required libraries
    pip install -r requirements.txt

Usage

  1. To train the model, run the training script with the environment variables below

    # Environment variables
    
    # Folder to export the trained model to
    export MODEL_EXPORT_PATH='folder_to_export_model_run_to'
    
    # Dataset path to be used (classifiers are pulled from the dataset)
    export DATASET='dataset_path'
    
    # By default, models are quantised and optimised; set to 0 to disable this
    export OPTIMISE=0
    
    # Number of epochs to train for
    export EPOCHS=200
    
    # Patience (in epochs) before training stops early
    export patience=5
    
    # Finally, execute the training script
    python train.py
    
  2. Perform recognition on camera input

    The real-time input to the model uses Python's OpenCV library. You can select the camera with the DEVICE_ID environment variable.

    # Environment variables:
    
    # Device ID of the camera
    export DEVICE_ID='device_id_of_camera'
    
    # Path to the exported model
    export MODEL_PATH='exported_model_path'
    
    # Finally, execute the recognition script
    python recognition.py
    

Results

TODO: Complete this section.

Contributing

Contributions to this project are welcome. To contribute:

  1. Fork the repository.
  2. Create a new branch for your feature (git checkout -b feature/your_feature).
  3. Commit your changes (git commit -m 'Add some feature').
  4. Push to the branch (git push origin feature/your_feature).
  5. Create a new Pull Request.


License

MIT License
