Abstract: To address the weaknesses of low-quality face recognition algorithms based on a unified feature space, namely poor robustness to low-quality faces and limited feature representation capability, a low-quality face image recognition algorithm based on knowledge distillation is proposed. First, the ResNeXt network is used as the backbone feature extraction network, and a dual-channel attention module is introduced to construct a teacher-student knowledge distillation framework with an attention mechanism. Second, the output features of the teacher network are adopted as label knowledge, passing effective recognition features to the student network, while attention map features are adopted as intermediate-layer knowledge to compensate for the limited information carried by output-layer knowledge alone; combining the two kinds of distilled knowledge enriches the feature knowledge and ensures the diversity of knowledge in the teacher network model. Then, the weighted sum of the label knowledge distillation loss, the attention map distillation loss, and the recognition loss is used as the total network loss function, so that the student network model has better learning ability. Finally, the algorithm is tested on images of different quality from the AgeDB-30 and CPLFW test sets. Ablation experiments show that, compared with a generic face recognition model without distillation, the model with both types of knowledge distillation improves recognition accuracy by 2.25%, 11.33%, and 24.64% and by 2.8%, 10.58%, and 27.85% on the two test sets, respectively. Comparative experiments show that the proposed algorithm also achieves varying degrees of accuracy improvement over other mainstream algorithms.
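The total loss described in the abstract combines three terms: a label-knowledge distillation loss on the teacher's output, an attention-map distillation loss on intermediate features, and the student's own recognition loss. A minimal sketch of such a weighted combination is shown below; the temperature `T`, the weights `alpha`, `beta`, `gamma`, and the choice of KL divergence for the label term and mean squared error for the attention term are illustrative assumptions, not the paper's exact formulation.

```python
import math

def softmax(logits, T=1.0):
    """Temperature-scaled softmax over a list of logits."""
    m = max(x / T for x in logits)
    exps = [math.exp(x / T - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def kl_div(p, q):
    """KL(p || q) between two discrete distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def mse(a, b):
    """Mean squared error between two flattened attention maps."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def cross_entropy(logits, label):
    """Recognition (classification) loss on the true label index."""
    return -math.log(softmax(logits)[label])

def total_loss(t_logits, s_logits, t_attn, s_attn, label,
               T=4.0, alpha=0.5, beta=0.3, gamma=0.2):
    """Weighted sum of the three terms; all weights are assumed values."""
    # Label-knowledge distillation: match softened teacher/student outputs.
    l_label = kl_div(softmax(t_logits, T), softmax(s_logits, T)) * T * T
    # Attention-map distillation: match intermediate attention maps.
    l_attn = mse(t_attn, s_attn)
    # Recognition loss: student's cross-entropy on the ground-truth label.
    l_rec = cross_entropy(s_logits, label)
    return alpha * l_label + beta * l_attn + gamma * l_rec
```

When the student's outputs and attention maps match the teacher's exactly, the two distillation terms vanish and only the weighted recognition loss remains, which is one quick sanity check for an implementation of this kind.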