Knowledge distillation based algorithm for low quality face image recognition
Author:
Affiliation:

Author bio:

Corresponding author:

CLC number: TP391.41

Fund projects: National Natural Science Foundation of China (Research on digital image tampering detection technology based on deep learning, No. 62362063); National Natural Science Foundation of China (Facial expression recognition algorithm for Uyghur faces under uncontrolled conditions, No. 61866037)

    Abstract:

    To address the shortcomings of low-quality face recognition algorithms based on a unified feature subspace, namely poor robustness to low-quality faces and limited feature representation capability, a low-quality face image recognition algorithm based on knowledge distillation is proposed. First, the ResNeXt network is used as the backbone feature extraction network, and a dual-channel attention module is introduced to construct a teacher-student knowledge distillation framework with an attention mechanism. Second, the output features of the teacher network serve as label knowledge, passing effective recognition features to the student network, while attention-map features serve as intermediate-layer knowledge to compensate for the limited information carried by the output-layer knowledge alone; combining the two forms of distillation enriches the feature knowledge and ensures the diversity of the teacher model's knowledge. Then, a weighted combination of the label-knowledge distillation loss, the attention-map distillation loss, and the recognition loss is used as the total network loss function, ensuring that the student network model has better learning ability. Finally, the method is tested on images of different quality from the AgeDB-30 and CPLFW test sets. Ablation experiments show that, compared with a generic face recognition model without distillation, the model with both forms of knowledge distillation improves recognition accuracy by 2.25%, 11.33%, and 24.64% on AgeDB-30 and by 2.8%, 10.58%, and 27.85% on CPLFW, respectively. Comparative experiments show that the proposed algorithm also achieves accuracy gains over other mainstream algorithms.
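The weighted loss combination described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the function names, the channel-pooled attention-map definition, and the weights `alpha`, `beta`, `gamma` are assumptions for illustration only.

```python
import numpy as np

def mse(a, b):
    """Mean squared error between two arrays of the same shape."""
    return float(np.mean((a - b) ** 2))

def cross_entropy(logits, label):
    """Recognition loss: negative log-softmax probability of the true class."""
    z = logits - logits.max()                   # numerically stable softmax
    log_probs = z - np.log(np.exp(z).sum())
    return float(-log_probs[label])

def attention_map(feat):
    """Illustrative spatial attention map: channel-pooled squared activations,
    (C, H, W) -> (H, W), L2-normalised."""
    a = (feat ** 2).mean(axis=0)
    return a / (np.linalg.norm(a) + 1e-8)

def total_loss(t_out, s_out, t_mid, s_mid, s_logits, label,
               alpha=0.5, beta=0.3, gamma=1.0):
    """Weighted combination of the three losses named in the abstract.
    alpha/beta/gamma are hypothetical weights, not values from the paper."""
    l_label = mse(t_out, s_out)                              # label-knowledge distillation
    l_attn = mse(attention_map(t_mid), attention_map(s_mid))  # attention-map distillation
    l_rec = cross_entropy(s_logits, label)                    # recognition loss
    return alpha * l_label + beta * l_attn + gamma * l_rec
```

When teacher and student agree exactly, the two distillation terms vanish and only the weighted recognition loss remains, so the student is still trained on the task itself even under perfect imitation.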

Cite this article:

YINGtezhaer·Aishanjiang, YIlihamu·Yaermaimaiti. Knowledge distillation based algorithm for low quality face image recognition[J]. Science Technology and Engineering, 2025, 25(2): 695-703.

History
  • Received: 2024-03-18
  • Revised: 2024-11-03
  • Accepted: 2024-05-21
  • Published online: 2025-01-21
  • Publication date: