Robust EEG-Based Emotion Recognition using CNN: A High-Accuracy Approach with Differential Entropy Features and Spatial-Frequency Domain Analysis on the SEED Dataset
DOI: https://doi.org/10.3329/jsr.v17i3.78907

Abstract
Human emotion recognition from EEG signals is a rapidly evolving research area and has become an important topic in affective computing and neuroscience. Neuro-computing has also shown potential applications in mental health monitoring, brain-computer interfaces, and adaptive learning systems. Deep learning models have made significant progress in producing effective results when applied to the analysis of EEG signals. In this study, the effectiveness of Convolutional Neural Network (CNN) models for emotion classification is investigated on the EEG-based SEED dataset. Differential Entropy (DE) features derived from five key EEG rhythms (delta, theta, alpha, beta, and gamma) are used as inputs to CNN classifiers. To enhance performance, the model uses a two-dimensional (2D) tensor representation of the input, which allows the network to learn and exploit spatial correlations between EEG channels. Experimental results show that the proposed CNN-based approach outperforms previous methods with an average accuracy of 94.09%. These findings highlight the potential of CNNs for building robust and scalable EEG-based emotion recognition systems, paving the way for more intuitive and adaptive applications.
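For context, the DE feature commonly used on SEED is computed per channel and per band under a Gaussian assumption, where it has a closed form: for a band-passed segment $X \sim \mathcal{N}(\mu, \sigma^2)$,

$$h(X) = -\int_{-\infty}^{\infty} p(x)\log p(x)\,dx = \frac{1}{2}\log\left(2\pi e \sigma^2\right),$$

so each channel-band pair contributes one scalar per time window. The sketch below illustrates how such features can feed a CNN of the kind the abstract describes; the 8x9 electrode-to-grid mapping, layer sizes, and choice of PyTorch are illustrative assumptions, as this page does not specify the exact architecture.

```python
# Minimal sketch (assumed details): DE feature extraction and a small CNN
# over a 2D channel grid. The 8x9 electrode grid, layer widths, and the
# use of PyTorch are illustrative assumptions, not the paper's exact setup.
import numpy as np
import torch
import torch.nn as nn

def differential_entropy(segment: np.ndarray) -> float:
    # DE of a band-passed EEG segment under a Gaussian assumption:
    # h(X) = 0.5 * log(2 * pi * e * var(X))
    return 0.5 * np.log(2 * np.pi * np.e * np.var(segment))

class DECNN(nn.Module):
    def __init__(self, n_classes: int = 3):  # SEED labels: negative/neutral/positive
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(5, 32, kernel_size=3, padding=1),   # 5 rhythms as input channels
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),  # spatial patterns across electrodes
            nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),                 # pooling independent of grid size
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 4 * 4, 128),
            nn.ReLU(),
            nn.Dropout(0.5),
            nn.Linear(128, n_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 5, 8, 9) -- one DE value per band per grid cell
        return self.classifier(self.features(x))

model = DECNN()
dummy = torch.randn(2, 5, 8, 9)  # two synthetic DE "images"
print(model(dummy).shape)        # torch.Size([2, 3])
```

Stacking the five rhythms as input channels lets each 3x3 convolution mix spatial (neighboring-electrode) and frequency (cross-band) information in a single operation, which is the intuition behind the 2D tensor representation described in the abstract.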
License

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
© Journal of Scientific Research
Articles published in the "Journal of Scientific Research" are Open Access articles under a Creative Commons Attribution-ShareAlike 4.0 International license (CC BY-SA 4.0). This license permits use, distribution, and reproduction in any medium, provided the original work and its initial publication in this journal are properly cited. In addition, users must provide a link to the license, indicate if changes were made, and distribute any remixed, transformed, or built-upon material under the same license as the original.