Facial Emotion Detection System (Inflators)

This model is a Convolutional Neural Network (CNN) trained to detect facial emotions from images. It classifies faces into one of 6 categories: Happiness, Sadness, Anger, Surprise, Fear, and Neutral.

This project was developed by the Inflators group for the ICT3212 - Introduction to Intelligent Systems module at Rajarata University of Sri Lanka.

Model Description

  • Model Type: Convolutional Neural Network (CNN)
  • Framework: TensorFlow / Keras
  • Input: 48x48 pixel grayscale images
  • Output: 6 emotion classes (Softmax probability distribution)
  • Training Dataset: FER-2013
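
The exact layer configuration is not listed here, so the following is only a minimal sketch of a typical CNN for this task (48x48 grayscale input, 6-way softmax output); the trained model's actual architecture may differ:

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_emotion_cnn(num_classes=6):
    # Hypothetical architecture for illustration; the released model may differ.
    return tf.keras.Sequential([
        layers.Input(shape=(48, 48, 1)),  # 48x48 grayscale input
        layers.Conv2D(32, 3, activation='relu', padding='same'),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation='relu', padding='same'),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation='relu'),
        layers.Dropout(0.5),
        layers.Dense(num_classes, activation='softmax'),  # probability over 6 emotions
    ])

model = build_emotion_cnn()
```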

Intended Use

This model is designed for:

  • Real-time facial emotion detection via webcam.
  • Analyzing video streams for emotional content.
  • Psychological research or customer satisfaction monitoring (experimental use).


It is not intended for high-stakes decision-making or surveillance without human oversight.

Performance

The model aims to achieve an accuracy of >60% on the FER-2013 test set, which is a competitive baseline for this difficult dataset given the 6-class problem (random guessing would be ~16%).
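
Test-set accuracy here is simply the fraction of samples whose highest-probability class matches the label. A minimal NumPy sketch (the arrays below are toy data, not FER-2013 results):

```python
import numpy as np

def accuracy(probs, labels):
    """Fraction of samples where the arg-max class equals the true label."""
    return float(np.mean(np.argmax(probs, axis=1) == labels))

# Toy example: 4 samples, 6 emotion classes
probs = np.array([
    [0.7, 0.1, 0.05, 0.05, 0.05, 0.05],  # predicts class 0
    [0.1, 0.6, 0.10, 0.10, 0.05, 0.05],  # predicts class 1
    [0.2, 0.2, 0.40, 0.10, 0.05, 0.05],  # predicts class 2
    [0.1, 0.1, 0.10, 0.10, 0.50, 0.10],  # predicts class 4
])
labels = np.array([0, 1, 3, 4])  # third sample is misclassified
print(accuracy(probs, labels))   # → 0.75
```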

Limitations

  • Occlusions: Performance may degrade if the face is partially covered (masks, glasses, hands).
  • Lighting: Extreme lighting conditions (too dark/too bright) can affect detection accuracy.
  • Head Pose: The model works best on frontal faces; significant head rotation may lead to misclassification.
  • Data Bias: The FER-2013 dataset has class imbalances and some incorrectly labeled images, which may bias the model's predictions.
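
One common mitigation for the class imbalance noted above is to weight the training loss inversely to class frequency. A sketch using NumPy (the label counts below are made up for illustration; the function name is ours, not part of this project):

```python
import numpy as np

def inverse_frequency_weights(labels, num_classes=6):
    """Weight each class by total / (num_classes * count), so rare classes count more."""
    counts = np.bincount(labels, minlength=num_classes)
    total = len(labels)
    return {c: total / (num_classes * counts[c]) for c in range(num_classes)}

# Toy distribution: class 2 ("Happy") heavily over-represented
labels = np.array([2] * 60 + [0] * 10 + [1] * 10 + [3] * 10 + [4] * 5 + [5] * 5)
weights = inverse_frequency_weights(labels)
# Pass as class_weight=weights to Keras model.fit(...) during training
```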

How to Use

1. Installation

Ensure you have the necessary libraries installed:

pip install tensorflow numpy opencv-python

2. Loading the Model

import tensorflow as tf
import cv2
import numpy as np

# Load the model
model = tf.keras.models.load_model('emotion_model.keras')

# Define emotion labels
emotion_labels = {0: 'Anger', 1: 'Fear', 2: 'Happy', 3: 'Sad', 4: 'Surprise', 5: 'Neutral'}

3. Inference on an Image

def predict_emotion(image_path):
    """Detect faces in an image and print the predicted emotion for each."""
    img = cv2.imread(image_path)
    if img is None:
        raise FileNotFoundError(f"Could not read image: {image_path}")
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    # Detect faces with OpenCV's bundled Haar cascade
    face_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    if len(faces) == 0:
        print("No faces detected.")
        return

    for (x, y, w, h) in faces:
        # Crop the face and match the model's expected input:
        # 48x48 grayscale, scaled to [0, 1], shape (1, 48, 48, 1)
        roi_gray = gray[y:y+h, x:x+w]
        roi_gray = cv2.resize(roi_gray, (48, 48))
        roi_gray = roi_gray.astype('float32') / 255.0
        roi_gray = np.expand_dims(roi_gray, axis=(0, -1))

        prediction = model.predict(roi_gray, verbose=0)
        label = emotion_labels[np.argmax(prediction)]
        print(f"Predicted Emotion: {label}")

predict_emotion('test_image.jpg')
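
4. Real-Time Inference via Webcam

The per-image routine above extends naturally to a webcam loop. A sketch (requires a camera, so it is not run here; `top_emotion` and `run_webcam` are our illustrative names, not part of the released code):

```python
import numpy as np

emotion_labels = {0: 'Anger', 1: 'Fear', 2: 'Happy', 3: 'Sad', 4: 'Surprise', 5: 'Neutral'}

def top_emotion(probs):
    """Return (label, confidence) for the highest-scoring class."""
    idx = int(np.argmax(probs))
    return emotion_labels[idx], float(probs[idx])

def run_webcam(model):
    """Annotate webcam frames with the predicted emotion. Press 'q' to quit."""
    import cv2  # imported here so the pure helper above stays NumPy-only
    face_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')
    cap = cv2.VideoCapture(0)
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
            # Same preprocessing as the single-image path: 48x48, [0, 1], (1, 48, 48, 1)
            roi = cv2.resize(gray[y:y+h, x:x+w], (48, 48)).astype('float32') / 255.0
            probs = model.predict(roi[np.newaxis, ..., np.newaxis], verbose=0)[0]
            label, conf = top_emotion(probs)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
            cv2.putText(frame, f"{label} ({conf:.2f})", (x, y - 10),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 0), 2)
        cv2.imshow('Emotion Detection', frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    cap.release()
    cv2.destroyAllWindows()
```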

Team Members (Inflators)

  • DTPD Wickramasinghe (Group Leader)
  • DVTR Vitharana
  • RSR Ranathunga
  • DDSS Kumasaru
  • SHD Mihidumpita