
Author:

Chen, Zelin [1] | Liu, Lvmin [2] | Wan, Yujie [3] | Chen, Yuzhong [4] | Dong, Chen [5] | Li, Weiwei [6] | Lin, Yuhang [7]

Indexed by:

EI Scopus SCIE

Abstract:

Multi-turn response selection is an important branch of natural language processing that aims to select the most appropriate response given a multi-turn dialogue context. Most state-of-the-art models adopt pre-trained language models (PrLMs) and multiple auxiliary tasks to enhance their ability to understand the semantics of multi-turn dialogue. However, several critical challenges remain. Optimizing multiple auxiliary tasks simultaneously may significantly increase the training cost. Meanwhile, the semantic gap between the optimization objectives of the main and auxiliary tasks may introduce noise into the pre-trained language model. To address these challenges, we propose an efficient BERT-based neural network model with local context comprehension (BERT-LCC) for multi-turn response selection. First, we propose a self-supervised learning strategy that introduces an auxiliary task named Response Prediction in Random Sliding Windows (RPRSW). In a multi-turn dialogue, the RPRSW task takes the utterances falling within a random sliding window as input and predicts whether the last utterance in the window is the appropriate response for the local dialogue context. This auxiliary task enhances BERT's understanding of local semantic information. Second, we propose a local information fusion (LIF) mechanism that collects multi-granularity local features at different dialogue stages and employs a gating function to fuse global features with local features. Third, we introduce a simple but effective domain learning strategy to learn rich semantic information at different dialogue stages during pre-training. Experimental results on two public benchmark datasets show that BERT-LCC outperforms other state-of-the-art models.
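
The abstract describes two concrete mechanisms: the RPRSW auxiliary task (predicting whether the last utterance in a random sliding window is an appropriate local response) and the LIF gating that fuses global and local features. The following is a minimal, hypothetical PyTorch sketch of how these two pieces could be realized; the function and class names, the 50% negative-sampling rate, and the single-linear-layer gate are assumptions for illustration, not the authors' released implementation.

    import random
    import torch
    import torch.nn as nn

    def make_rprsw_example(utterances, window_size, corpus_responses):
        # Build one RPRSW training example from a multi-turn dialogue.
        # A random window of consecutive utterances is sampled; with 50%
        # probability (an assumed rate) the last utterance is kept as the
        # true local response (label 1), otherwise it is replaced by a
        # randomly sampled utterance from the corpus (label 0).
        start = random.randint(0, len(utterances) - window_size)
        window = utterances[start:start + window_size]
        if random.random() < 0.5:
            return window, 1
        negative = random.choice(corpus_responses)
        return window[:-1] + [negative], 0

    class GatedLocalGlobalFusion(nn.Module):
        # Gated fusion of a global dialogue representation with a
        # local-context representation, in the spirit of the LIF mechanism.
        def __init__(self, hidden_size):
            super().__init__()
            self.gate = nn.Linear(2 * hidden_size, hidden_size)

        def forward(self, global_feat, local_feat):
            # g in (0, 1) decides, per dimension, how much of the global
            # feature to keep versus the local feature.
            g = torch.sigmoid(self.gate(torch.cat([global_feat, local_feat], dim=-1)))
            return g * global_feat + (1 - g) * local_feat

    # Example usage with BERT-sized vectors (hidden size 768 assumed):
    # fusion = GatedLocalGlobalFusion(hidden_size=768)
    # fused = fusion(global_feat, local_feat)  # both of shape [batch, 768]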

Keyword:

BERT; Local context; Multi-task learning; Multi-turn dialogue; Response selection

Community:

  • [ 1 ] [Chen, Yuzhong]Fuzhou Univ, Coll Comp & Data Sci, Fuzhou 350108, Fujian, Peoples R China
  • [ 2 ] [Chen, Yuzhong]Fujian Prov Key Lab Network Comp & Intelligent Inf, Fuzhou 350108, Fujian, Peoples R China

Reprint Author's Address:


Related Keywords:

Source:

COMPUTER SPEECH AND LANGUAGE

ISSN: 0885-2308

Year: 2023

Volume: 82

Impact Factor: 3.1 (JCR@2023)

ESI Discipline: COMPUTER SCIENCE

ESI HC Threshold: 32

JCR Journal Grade: 2

CAS Journal Grade: 3

Cited Count:

WoS CC Cited Count: 3

SCOPUS Cited Count: 6

ESI Highly Cited Papers on the List: 0

WanFang Cited Count:

Chinese Cited Count:
