This project focuses on detecting emotions from voice recordings across multiple languages using machine learning and deep learning techniques.
The goal is to classify emotions such as happy, sad, angry, and neutral from audio speech signals, leveraging multilingual datasets, feature extraction techniques, and neural networks.
```
multilingual_emotion_detection_in_voice.ipynb   # Main notebook
data/               # Directory for audio datasets
models/             # Saved model files (optional)
README.md           # Project documentation
requirements.txt    # Python dependencies (optional)
```
```bash
pip install -r requirements.txt
jupyter notebook multilingual_emotion_detection_in_voice.ipynb
```
This project can be used with publicly available speech-emotion datasets. Make sure to adjust the preprocessing steps if your dataset's audio format, sampling rate, or label scheme differs.
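As an illustration of the kind of preprocessing adjustment this refers to, here is a minimal NumPy-only sketch that brings a recording to a common sampling rate and loudness. The 16 kHz target rate is a hypothetical choice, and linear-interpolation resampling is a simple stand-in for a proper resampler such as `librosa.resample`:

```python
import numpy as np

TARGET_SR = 16_000  # hypothetical target sampling rate


def resample(signal: np.ndarray, orig_sr: int, target_sr: int = TARGET_SR) -> np.ndarray:
    """Linear-interpolation resampling (a simple stand-in for a real resampler)."""
    if orig_sr == target_sr:
        return signal
    duration = len(signal) / orig_sr
    n_target = int(round(duration * target_sr))
    old_t = np.linspace(0.0, duration, num=len(signal), endpoint=False)
    new_t = np.linspace(0.0, duration, num=n_target, endpoint=False)
    return np.interp(new_t, old_t, signal)


def peak_normalize(signal: np.ndarray) -> np.ndarray:
    """Scale so the loudest sample has magnitude 1, removing loudness differences."""
    peak = np.max(np.abs(signal))
    return signal / peak if peak > 0 else signal


# Example: a 1-second 440 Hz tone recorded at 44.1 kHz, brought to 16 kHz.
sr = 44_100
tone = 0.5 * np.sin(2 * np.pi * 440 * np.arange(sr) / sr)
processed = peak_normalize(resample(tone, sr))
```

After this step every clip has the same sampling rate and peak level, so the feature extraction that follows sees comparable inputs regardless of the source dataset.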
Models are trained on MFCCs and other relevant features extracted from the audio. Performance is evaluated using accuracy, a confusion matrix, and other classification metrics.
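To make the evaluation step concrete, here is a self-contained NumPy sketch. The feature vectors are synthetic stand-ins for per-clip averaged MFCCs (not real data), and a nearest-centroid classifier replaces the notebook's neural network; the point is only to show how accuracy and a confusion matrix are computed from predictions:

```python
import numpy as np

rng = np.random.default_rng(0)
EMOTIONS = ["happy", "sad", "angry", "neutral"]
N_MFCC = 13  # a typical number of MFCC coefficients

# Synthetic per-class clusters standing in for averaged MFCC feature vectors.
centers = rng.normal(size=(len(EMOTIONS), N_MFCC)) * 5
X = np.vstack([c + rng.normal(scale=0.5, size=(30, N_MFCC)) for c in centers])
y = np.repeat(np.arange(len(EMOTIONS)), 30)

# Nearest-centroid classifier: assign each sample to the closest class mean.
centroids = np.stack([X[y == k].mean(axis=0) for k in range(len(EMOTIONS))])
pred = np.argmin(((X[:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)

accuracy = (pred == y).mean()

# Confusion matrix: rows are true emotions, columns are predicted emotions.
conf = np.zeros((len(EMOTIONS), len(EMOTIONS)), dtype=int)
for t, p in zip(y, pred):
    conf[t, p] += 1

print(f"accuracy={accuracy:.2f}")
print(conf)
```

In the actual pipeline, `X` would come from MFCC extraction over the audio clips and `pred` from the trained model; the accuracy and confusion-matrix bookkeeping stays the same.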
Author: Ashika M
Email: ashikasjcetcse@gmail.com
GitHub: https://github.com/ashika67