Abstract:
In recent years, the use of brain-computer interfaces (BCIs) for implicit emotion tagging of multimedia content has become a popular area of research. In this thesis, a statistical-modeling-based emotion recognition method is proposed that utilizes EMD-DWT-transformed electroencephalogram (EEG) signals recorded in response to music videos. By applying Empirical Mode Decomposition (EMD), a set of dominant Intrinsic Mode Functions (IMFs) is selected, on which a Discrete Wavelet Transform (DWT) is performed. Various statistical models are then fitted to the resulting DWT coefficients, and the models that describe the coefficients best are identified using the Bayesian Information Criterion. The parameters of the selected models are used to form the feature vector. Furthermore, an efficient feature selection step is formulated using the ReliefF algorithm to discard redundant features produced by the feature extraction scheme. The reduced feature set thus obtained is fed to a Support Vector Machine (SVM) to perform two-class classification along each dimension describing emotion. Extensive simulations are carried out to test the efficacy of the proposed method using DEAP, an affective computing database. It is found that the proposed method outperforms several state-of-the-art methods in terms of accuracy and F1-score.
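To make the pipeline described above concrete, a minimal end-to-end sketch is given below. It is an illustrative assumption, not the thesis implementation: it assumes Python with the PyEMD, PyWavelets, SciPy, skrebate, and scikit-learn packages, a toy candidate-model set (normal, Laplace, generalized normal), synthetic single-channel data in place of DEAP recordings, and arbitrary settings for the number of IMFs, wavelet, decomposition level, and number of selected features.

```python
# Hedged sketch of the described pipeline. Library choices, candidate models,
# parameter values, and the synthetic data are all assumptions for illustration.
import numpy as np
import pywt
from PyEMD import EMD
from scipy import stats
from skrebate import ReliefF
from sklearn.svm import SVC

CANDIDATE_DISTS = [stats.norm, stats.laplace, stats.gennorm]  # assumed candidate models
MAX_PARAMS = 3  # pad parameter vectors to a common length (gennorm has 3 parameters)

def best_fit_params(coeffs):
    """Fit each candidate distribution to a band of DWT coefficients and
    keep the parameters of the model with the lowest BIC."""
    best = None
    n = len(coeffs)
    for dist in CANDIDATE_DISTS:
        params = dist.fit(coeffs)
        loglik = np.sum(dist.logpdf(coeffs, *params))
        bic = len(params) * np.log(n) - 2.0 * loglik
        if best is None or bic < best[0]:
            best = (bic, params)
    return np.pad(best[1], (0, MAX_PARAMS - len(best[1])))

def extract_features(eeg_channel, n_imfs=3, wavelet="db4", level=4):
    """EMD -> dominant IMFs -> DWT -> statistical-model parameters as features."""
    imfs = EMD()(eeg_channel)[:n_imfs]  # keep the first (dominant) IMFs
    feats = []
    for imf in imfs:
        for band in pywt.wavedec(imf, wavelet, level=level):
            feats.extend(best_fit_params(band))  # model parameters form the feature vector
    return np.asarray(feats)

# Synthetic stand-in for EEG trials and binary labels (e.g., high/low valence);
# the toy trial length keeps the example fast, unlike full-length DEAP trials.
rng = np.random.default_rng(0)
X_raw = rng.standard_normal((40, 1024))
y = rng.integers(0, 2, size=40)

X = np.vstack([extract_features(trial) for trial in X_raw])

# ReliefF-based feature selection followed by two-class SVM classification.
selector = ReliefF(n_features_to_select=20, n_neighbors=10)
X_sel = selector.fit_transform(X, y)
clf = SVC(kernel="rbf").fit(X_sel, y)
print("Training accuracy:", clf.score(X_sel, y))
```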