A new deep learning model for EEG-based emotion recognition

Recent developments in machine learning have enabled new techniques for detecting and recognizing human emotions. Some of these techniques analyze electroencephalography (EEG) signals, which record the brain's electrical activity via electrodes placed on the scalp.

Most EEG-based emotion classification methods introduced over the past decade employ traditional machine learning (ML) techniques, such as SVM models, because these models require fewer training samples and large-scale EEG datasets are still scarce. Recently, however, researchers have compiled and released several new datasets containing EEG brain recordings.

The release of these datasets opens up exciting new possibilities for EEG-based emotion recognition, as they could be used to develop deep learning models that outperform traditional ML techniques. Unfortunately, however, the low resolution of the EEG signals contained in these datasets could make training deep learning models quite difficult.

“Low-resolution problems remain an issue in EEG-based emotion classification,” said Sunhee Hwang, one of the researchers who carried out the study. “We had an idea to solve this problem, which involves generating high-resolution EEG images.”

To make better use of the available EEG data, Hwang and her colleagues first generated topology-preserving differential entropy (DE) features, using the electrode coordinates recorded when the data were collected. They then developed a convolutional neural network (CNN) and trained it on these features, teaching it to classify three broad categories of emotion (i.e., positive, neutral, and negative).
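The article does not spell out the researchers' exact feature construction, but the general idea of mapping per-channel differential entropy values onto a 2D grid according to electrode position can be sketched as follows. The grid layout, channel names, and function names here are illustrative assumptions, not the authors' code:

```python
import numpy as np

# Differential entropy (DE) of a band-passed EEG signal: for a
# Gaussian-distributed series, DE = 0.5 * ln(2 * pi * e * variance).
def differential_entropy(signal):
    return 0.5 * np.log(2 * np.pi * np.e * np.var(signal))

# Hypothetical 2D electrode layout: each channel name maps to a
# (row, col) cell that roughly mirrors its position on the scalp,
# so spatial topology is preserved in the resulting "image".
ELECTRODE_GRID = {
    "FP1": (0, 1), "FP2": (0, 3),
    "F3":  (1, 1), "FZ":  (1, 2), "F4": (1, 3),
    "C3":  (2, 1), "CZ":  (2, 2), "C4": (2, 3),
    "P3":  (3, 1), "PZ":  (3, 2), "P4": (3, 3),
    "O1":  (4, 1), "O2":  (4, 3),
}

def de_image(channel_signals, grid=ELECTRODE_GRID, shape=(5, 5)):
    """Build a topology-preserving DE image from per-channel signals."""
    image = np.zeros(shape)
    for name, samples in channel_signals.items():
        row, col = grid[name]
        image[row, col] = differential_entropy(samples)
    return image

# Synthetic example: random signals standing in for band-passed EEG.
rng = np.random.default_rng(0)
signals = {name: rng.normal(scale=2.0, size=1000) for name in ELECTRODE_GRID}
img = de_image(signals)
print(img.shape)  # (5, 5)
```

A stack of such images (one per frequency band) could then be fed to a CNN like any other multi-channel image, which is what makes the topology-preserving layout useful: nearby electrodes end up in nearby pixels.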

“Previous methods tend to ignore the topology information of EEG features, but our method enhances the EEG representation by learning from the generated high-resolution EEG images,” Hwang said. “Our method re-encodes the EEG features through the proposed CNN, which produces a clustering effect that yields a better representation.”

The researchers trained and evaluated their approach on the SEED dataset, which contains 62-channel EEG signals. They found that their method could classify emotions with a remarkable average accuracy of 90.41%, outperforming other EEG-based machine learning techniques.

“Even when EEG signals are recorded during different emotional clips, the original DE features are not well clustered,” Hwang added. “We have also applied our method to estimating driver vigilance to demonstrate its applicability.”

In the future, the method proposed by Hwang and her colleagues could inform the development of new EEG-based emotion recognition tools, as it introduces a viable way to overcome issues related to low-resolution EEG data. The same approach could also be applied to other deep learning models for EEG analysis, including those designed for tasks other than the classification of human emotions.

“For computer vision tasks, large datasets have enabled the immense success of deep learning models for image classification, some of which have surpassed human performance,” Hwang said. “Also, complex data processing is no longer necessary. In our future work, we hope to create large-scale EEG datasets using a GAN.”

A deep learning technique for EEG-based emotion recognition.

Source: phys.org, themediahq.com