
author:

Li, W. [1] | Chen, Y. [2] | Xu, J. [3] | Zhong, J. [4] | Dong, C. [5]

Indexed by:

Scopus

Abstract:

Multi-turn response selection is an important component of retrieval-based human–computer dialogue systems. Most recent models use pre-trained language models to acquire fine-grained semantic information from diverse dialogue contexts, thereby improving the precision of response selection. However, effectively leveraging speakers' language style information, together with the topic information in the dialogue context, to strengthen the semantic understanding of pre-trained language models remains a significant challenge. To address this challenge, we propose a BERT-based Language Style and Topic Aware (BERT-LSTA) model for multi-turn response selection. BERT-LSTA augments BERT with two modules: a Language Style Aware (LSA) module and a Question-oriented Topic Window Selection (QTWS) module. The LSA module introduces a contrastive learning method to learn latent language style information from the distinct speakers in a dialogue. The QTWS module proposes a topic window segmentation algorithm that segments the dialogue context into topic windows, enabling BERT-LSTA to extract and incorporate the topic information relevant to response selection. Experimental results on two public benchmark datasets demonstrate that BERT-LSTA outperforms state-of-the-art baseline models across various metrics. Furthermore, ablation studies reveal that the LSA module significantly improves performance by capturing speaker-specific language styles, while the QTWS module enhances topic relevance by filtering out irrelevant contextual information.
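The abstract gives no implementation details, but the two mechanisms it names can be illustrated with a minimal sketch. The snippet below is not the authors' code: it assumes an InfoNCE-style formulation for the LSA contrastive objective (treating utterances from the same speaker as positive pairs) and a simple similarity-threshold heuristic for topic window segmentation; the function names (speaker_contrastive_loss, split_topic_windows) and parameters (temperature, threshold) are hypothetical.

# A minimal sketch (assumed details, not the authors' implementation) of the
# two ideas the abstract describes: an InfoNCE-style contrastive loss that
# pulls together utterance embeddings from the same speaker (LSA), and a
# greedy topic-window split based on adjacent-utterance similarity (QTWS).
import torch
import torch.nn.functional as F

def speaker_contrastive_loss(emb: torch.Tensor,
                             speaker_ids: torch.Tensor,
                             temperature: float = 0.1) -> torch.Tensor:
    """InfoNCE over utterance embeddings; same-speaker pairs are positives.

    emb:         (n, d) utterance embeddings (e.g. BERT [CLS] vectors)
    speaker_ids: (n,) integer speaker id per utterance
    Assumes at least one speaker contributes two or more utterances.
    """
    z = F.normalize(emb, dim=-1)
    sim = z @ z.t() / temperature                       # (n, n) similarities
    n = sim.size(0)
    eye = torch.eye(n, dtype=torch.bool, device=sim.device)
    pos = (speaker_ids.unsqueeze(0) == speaker_ids.unsqueeze(1)) & ~eye
    sim = sim.masked_fill(eye, float('-inf'))           # exclude self-pairs
    log_prob = sim - sim.logsumexp(dim=-1, keepdim=True)
    # average log-likelihood of each utterance's same-speaker positives
    pos_count = pos.sum(dim=-1).clamp(min=1)
    loss = -(log_prob.masked_fill(~pos, 0.0).sum(dim=-1) / pos_count)
    return loss[pos.any(dim=-1)].mean()

def split_topic_windows(emb: torch.Tensor, threshold: float = 0.5) -> list:
    """Greedy split: open a new topic window whenever the cosine similarity
    between consecutive utterance embeddings falls below the threshold."""
    z = F.normalize(emb, dim=-1)
    windows, start = [], 0
    for i in range(1, z.size(0)):
        if float(z[i - 1] @ z[i]) < threshold:
            windows.append((start, i))                  # [start, i) is one topic
            start = i
    windows.append((start, z.size(0)))
    return windows

In this reading, the contrastive term would be added to BERT's response-matching loss during fine-tuning, and candidate responses would be scored only against the windows judged relevant. Note that the paper's actual QTWS algorithm is question-oriented, which the plain threshold heuristic above does not capture.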

Keyword:

BERT; Contrastive learning; Language style; Multi-turn response selection; Topic window segmentation

Community:

  • [ 1 ] [Li W.]College of Computer and Data Science, Fuzhou University, Fujian Province, Fuzhou, 350108, China
  • [ 2 ] [Li W.]Fujian Provincial Key Laboratory of Network Computing and Intelligent Information Processing, Fujian Province, Fuzhou, 350108, China
  • [ 3 ] [Chen Y.]College of Computer and Data Science, Fuzhou University, Fujian Province, Fuzhou, 350108, China
  • [ 4 ] [Chen Y.]Fujian Provincial Key Laboratory of Network Computing and Intelligent Information Processing, Fujian Province, Fuzhou, 350108, China
  • [ 5 ] [Xu J.]College of Computer and Data Science, Fuzhou University, Fujian Province, Fuzhou, 350108, China
  • [ 6 ] [Xu J.]Fujian Provincial Key Laboratory of Network Computing and Intelligent Information Processing, Fujian Province, Fuzhou, 350108, China
  • [ 7 ] [Zhong J.]College of Computer and Data Science, Fuzhou University, Fujian Province, Fuzhou, 350108, China
  • [ 8 ] [Zhong J.]Fujian Provincial Key Laboratory of Network Computing and Intelligent Information Processing, Fujian Province, Fuzhou, 350108, China
  • [ 9 ] [Dong C.]College of Computer and Data Science, Fuzhou University, Fujian Province, Fuzhou, 350108, China
  • [ 10 ] [Dong C.]Fujian Provincial Key Laboratory of Network Computing and Intelligent Information Processing, Fujian Province, Fuzhou, 350108, China

Reprint Author's Address:

Email:


Related Keywords:

Source:

Computer Speech and Language

ISSN: 0885-2308

Year: 2026

Volume: 95

Impact Factor: 3.100 (JCR@2023)

Cited Count:

WoS CC Cited Count:

SCOPUS Cited Count:

ESI Highly Cited Papers on the List: 0

WanFang Cited Count:

Chinese Cited Count:


Affiliated Colleges:
