Abstract: To address the poor question-answering performance caused by the complexity of Chinese glyphs and semantic information, a knowledge graph question answering method based on the Chinese pre-trained language model ChineseBERT is proposed. The method employs ChineseBERT, which incorporates both glyph and pinyin information, as the semantic embedding layer of the text, improving the performance of traditional semantic parsing methods on the entity mention recognition and relation prediction subtasks. Specifically, this paper proposes an entity mention recognition model based on ChineseBERT-CRF and a relation prediction model based on ChineseBERT-TextCNN-Softmax, which together strengthen the semantic understanding of Chinese text. Finally, information from the two subtasks is combined to predict the final answer. Experimental results on the educational Q&A dataset MOOC Q&A and the open-domain Q&A dataset NLPCC2018 demonstrate the effectiveness of the proposed method.
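To illustrate the relation prediction head mentioned above, the following is a minimal pure-Python sketch of a TextCNN-with-softmax classifier over precomputed token embeddings. Random vectors stand in for ChineseBERT outputs, and all dimensions, filter widths, and the number of candidate relations are illustrative assumptions, not the paper's actual configuration.

```python
import math
import random

def softmax(xs):
    # numerically stable softmax over a list of logits
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def conv_max_pool(tokens, filt):
    """Slide a width-w convolution filter over the token embeddings and
    keep the maximum activation (max-over-time pooling, as in TextCNN)."""
    w = len(filt)
    scores = []
    for i in range(len(tokens) - w + 1):
        s = sum(tokens[i + j][d] * filt[j][d]
                for j in range(w) for d in range(len(filt[0])))
        scores.append(s)
    return max(scores)

def relation_probs(tokens, filters, weights, bias):
    # pooled feature per filter, then a dense layer + softmax over relations
    feats = [conv_max_pool(tokens, f) for f in filters]
    logits = [sum(feats[i] * weights[k][i] for i in range(len(feats))) + bias[k]
              for k in range(len(bias))]
    return softmax(logits)

# Toy setup (all hypothetical): 6 tokens with 4-dim embeddings standing in
# for ChineseBERT output, filters of widths 2 and 3, 3 candidate relations.
random.seed(0)
dim = 4
tokens = [[random.gauss(0, 1) for _ in range(dim)] for _ in range(6)]
filters = [[[random.gauss(0, 1) for _ in range(dim)] for _ in range(w)]
           for w in (2, 3)]
weights = [[random.gauss(0, 1) for _ in range(len(filters))] for _ in range(3)]
bias = [0.0, 0.0, 0.0]
probs = relation_probs(tokens, filters, weights, bias)
```

In the full system, the highest-probability relation would then be paired with the recognized entity mention to query the knowledge graph for the answer.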