Abstract: To address the low accuracy of single-view networks in epilepsy detection and recognition, a multi-view convolutional network model with a fused attention mechanism (FAM-MCNN) is proposed. FAM-MCNN extracts multi-view features from the time, frequency, time-frequency, and nonlinear domains to comprehensively characterise EEG signals; uses multi-scale convolution to capture detail at different levels; and introduces an attention mechanism that weights and fuses features along both the view dimension and the individual feature-vector dimension, improving discrimination between the different categories of EEG signals from epilepsy patients. Comparison experiments on the CHB-MIT epilepsy dataset show that the average accuracy, sensitivity, and specificity of the FAM-MCNN model improve by 14.29%, 16.13%, and 12.54%, respectively, compared with a single-view network. In addition, experiments with a small proportion of training samples (25%) show that its detection performance matches that of the comparison models trained with a large proportion of samples (80%-90%).
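The attention-based fusion over the view dimension described above can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the scoring function (a single learned projection), the number of features per view, and the function name `view_attention_fusion` are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def view_attention_fusion(views, w, b):
    """Score each view's feature vector, normalise the scores into
    attention weights, and fuse the views by weighted sum.
    views: (V, D) per-view features; w: (D,) score weights; b: (V,) bias."""
    scores = views @ w + b        # (V,) one scalar score per view
    alpha = softmax(scores)       # attention weights over the V views
    fused = alpha @ views         # (D,) attention-weighted fusion
    return fused, alpha

# Four hypothetical views: time, frequency, time-frequency, nonlinear,
# each reduced to an 8-dimensional feature vector for illustration.
views = rng.normal(size=(4, 8))
w = rng.normal(size=8)
b = np.zeros(4)
fused, alpha = view_attention_fusion(views, w, b)
assert fused.shape == (8,)
assert np.isclose(alpha.sum(), 1.0)  # weights form a distribution
```

The same pattern applies along the feature-vector dimension by scoring individual feature positions instead of whole views.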