Query:
Scholar name: 叶东毅 (Ye Dongyi)
Abstract :
Question generation, a challenging natural language processing task that aims to generate questions for a given answer and its context, has received extensive attention in recent years. Thanks to the development of neural networks, the task has recently made considerable progress. However, existing models still fail to make effective use of external knowledge, and those that use graph neural networks to capture hidden structural information fail to capture syntactic information. To address these problems, this paper proposes the Knowledge-Enhanced Bi-Graph Interaction Neural Network (KE-BGINN). First, to exploit external knowledge effectively, KE-BGINN constructs a knowledge-enhanced graph from the structural information of a knowledge graph and uses a graph convolutional network to enrich the contextual semantics of the text and the answer. Second, KE-BGINN introduces a bi-graph interaction mechanism in which two graph convolutional networks learn the hidden structural information and the syntactic information of the context; when fusing information across graphs, a virtual graph is constructed to fully merge the heterogeneous information of the different graphs. Finally, KE-BGINN uses a pointer-network decoding mechanism to handle rare and unknown words during question generation. Experimental results on the SQuAD dataset show that KE-BGINN outperforms the compared models in overall performance.
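The abstract describes KE-BGINN only at the architecture level. As an illustration of its graph-convolution building block, here is a minimal numpy sketch of one symmetrically normalized GCN layer; the function name and toy data are ours, not from the paper:

```python
import numpy as np

def gcn_layer(A, X, W):
    """One graph-convolution step: symmetric normalization, then linear map + ReLU."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    d = A_hat.sum(axis=1)                     # node degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt  # D^{-1/2} (A+I) D^{-1/2}
    return np.maximum(A_norm @ X @ W, 0.0)    # ReLU activation

# toy graph: 3 nodes forming a path 0-1-2
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
X = np.eye(3)                 # one-hot node features
W = np.ones((3, 2)) * 0.5     # toy weight matrix
H = gcn_layer(A, X, W)
print(H.shape)                # (3, 2)
```

Stacking such layers over the knowledge-enhanced graph is how the abstract's "contextual semantics enrichment" would typically be realized.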
Keyword :
bi-graph interaction; graph convolutional network; knowledge graph; virtual graph; question generation
Cite:
Copy from the list, or export to your reference manager.
GB/T 7714: 李亚峰, 叶东毅, 陈羽中. 用于问题生成的知识增强双图交互网络[J]. 小型微型计算机系统, 2024, 45(5): 1032-1038.
MLA: 李亚峰, et al. "用于问题生成的知识增强双图交互网络." 小型微型计算机系统 45.5 (2024): 1032-1038.
APA: 李亚峰, 叶东毅, 陈羽中. 用于问题生成的知识增强双图交互网络. 小型微型计算机系统, 2024, 45(5), 1032-1038.
Export to: NoteExpress, RIS, BibTeX
Abstract :
Multi-round conversational recommendation (MCR), which fulfills a real-time recommendation task by interactively asking about attributes and recommending items, can be regarded as a multi-step Markov decision process. The key point in MCR is thus how to appropriately guide and characterize the dynamic interactive process of the conversation, and how to mine the dynamic relationships among the user, items and attributes so as to decide when to ask (about attributes) or recommend (items), which attributes to ask about, and which items to recommend. Recent works mainly use statistical information from the conversation process to characterize the conversation, without comprehensively considering the dynamic relationships among the user, items and attributes, which may hinder accurate capture of the user's real-time preferences. To address this issue, we propose DHGECON, a multi-round conversational recommendation method based on dynamic heterogeneous encoding. Firstly, a dynamic heterogeneous graph with three types of nodes (users, items, and attributes) is constructed to characterize the ongoing conversation. Secondly, a heterogeneous-graph-based encoder that adaptively updates the attention weights of nodes is designed to mine and represent the dynamic high-order semantic relationships among the user, items and attributes. Finally, the encoded information is fed into the decision-making module to obtain the action (i.e., recommending items or asking about attributes) that guides the next conversation round. Experimental results show that, compared with state-of-the-art methods, the proposed method achieves significant improvements on the major evaluation metrics over four real-world datasets.
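As a toy illustration of the adaptive attention-weight update the DHGECON encoder relies on, the following sketch aggregates a user's neighboring item/attribute embeddings with dot-product attention; names and data are hypothetical, not taken from the paper:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attend(query, neighbors):
    """Aggregate neighbor embeddings with dot-product attention weights."""
    scores = neighbors @ query              # one score per neighbor
    weights = softmax(scores)               # adaptive attention weights
    return weights @ neighbors, weights     # weighted sum + the weights

user = np.array([1.0, 0.0])
# embeddings of attribute/item nodes adjacent to the user in the dynamic graph
nbrs = np.array([[1.0, 0.0],    # well-aligned neighbor
                 [0.0, 1.0]])   # orthogonal neighbor
agg, w = attend(user, nbrs)
print(w[0] > w[1])  # True: the aligned neighbor gets the larger weight
```

In the paper's setting these weights are refreshed each turn as feedback arrives, which is what makes the heterogeneous graph "dynamic".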
Keyword :
Dynamic heterogeneous graph; multi-round conversational recommender system; self-attention mechanism
Cite:
GB/T 7714: Yao, Huayong, Yao, Hongyu, Ye, Dongyi. DHGECON: A multi-round conversational recommendation method based on dynamic heterogeneous encoding[J]. KNOWLEDGE-BASED SYSTEMS, 2023, 273.
MLA: Yao, Huayong, et al. "DHGECON: A multi-round conversational recommendation method based on dynamic heterogeneous encoding." KNOWLEDGE-BASED SYSTEMS 273 (2023).
APA: Yao, Huayong, Yao, Hongyu, Ye, Dongyi. DHGECON: A multi-round conversational recommendation method based on dynamic heterogeneous encoding. KNOWLEDGE-BASED SYSTEMS, 2023, 273.
Abstract :
Multi-round conversational recommender systems (CRS) acquire users' real-time information interactively, and thus achieve better recommendations than traditional methods such as collaborative filtering. However, existing CRS suffer from inaccurate capture of user preferences, an excessive number of required conversation turns, and poorly timed recommendations. To address these problems, a conversational recommendation algorithm based on deep reinforcement learning that considers users' multi-granularity feedback is proposed. Unlike existing CRS, in each conversation turn the proposed algorithm considers the user's feedback on both the items themselves and the finer-grained item attributes; it then updates the features of the user, items, and attributes online according to the collected multi-granularity feedback, and uses the Deep Q-Network (DQN) algorithm to analyze the environment state after each turn, helping the system make well-timed, reasonable decisions. The system can thus analyze why the user buys an item within relatively few conversation turns and mine the user's real-time preferences more comprehensively. Compared with the conversational path reasoning (SCPR) algorithm on the real-world Last.fm dataset, the proposed algorithm improves the 15-turn recommendation success rate by 46.5% and shortens the average number of recommendation turns by 0.314; on the real-world Yelp dataset it maintains the same level of success rate while shortening the average number of turns by 0.51.
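The decision module described above builds on standard DQN machinery. A minimal sketch of the two generic ingredients it assumes, epsilon-greedy action selection and the one-step Bellman target; function names and the toy two-action state are ours, not the paper's:

```python
import random
import numpy as np

def epsilon_greedy(q_values, epsilon):
    """Pick a random action with probability epsilon, else the greedy one."""
    if random.random() < epsilon:
        return random.randrange(len(q_values))
    return int(np.argmax(q_values))

def td_target(reward, q_next, gamma=0.95, done=False):
    """One-step Bellman target used to train the Q-network."""
    return reward if done else reward + gamma * float(q_next.max())

# toy state with two actions: 0 = recommend an item, 1 = ask an attribute
q = np.array([0.2, 0.7])
assert epsilon_greedy(q, epsilon=0.0) == 1     # greedy pick: ask an attribute
target = td_target(reward=1.0, q_next=np.array([0.5, 0.1]))
print(target)  # 1.0 + 0.95 * 0.5 = 1.475
```

In the paper the Q-network additionally conditions on the online-updated user/item/attribute features, which is what ties the multi-granularity feedback into the decision.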
Keyword :
preference mining; feedback information; multi-granularity; multi-round conversational recommender system; deep Q-network (DQN)
Cite:
GB/T 7714: 姚华勇, 叶东毅, 陈昭炯. 考虑多粒度反馈的多轮对话强化学习推荐算法[J]. 计算机应用, 2023, 43(1): 15-21.
MLA: 姚华勇, et al. "考虑多粒度反馈的多轮对话强化学习推荐算法." 计算机应用 43.1 (2023): 15-21.
APA: 姚华勇, 叶东毅, 陈昭炯. 考虑多粒度反馈的多轮对话强化学习推荐算法. 计算机应用, 2023, 43(1), 15-21.
Abstract :
Multi-view clustering has attracted increasing attention owing to its ability to leverage the complementarity of multi-view data. Existing multi-view clustering methods have explored nonnegative matrix factorization to decompose a matrix into multiple matrices as feature representations of multi-view data, but these representations are not discriminative enough to deal with natural data containing complex information. Moreover, most multi-view clustering methods prioritize the consensus information among multi-view data, leaving a large amount of information redundant and deteriorating clustering performance. To address these issues, this paper proposes a multi-view clustering framework that adopts a diversity loss for deep matrix factorization, reducing feature redundancy while obtaining more discriminative features. We then bridge the relation between deep auto-encoders and deep matrix factorization to optimize the objective function, which avoids the challenges of the optimization process. Extensive experiments demonstrate that the proposed method is superior to state-of-the-art methods.
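The paper's exact diversity loss is not reproduced here. One common way to penalize redundancy between two views' feature matrices, shown purely as an illustrative assumption, is the squared cross-view inner product, which vanishes when the views carry orthogonal (non-redundant) information:

```python
import numpy as np

def diversity_penalty(H1, H2):
    """Penalize overlap between two views' feature matrices.
    Zero when the factors are orthogonal, large when they are redundant."""
    return float(np.sum((H1.T @ H2) ** 2))

H1 = np.array([[1.0], [0.0], [0.0]])   # view-1 factor (3 samples, 1 dim)
H2 = np.array([[0.0], [1.0], [0.0]])   # view-2 factor, orthogonal to view 1
print(diversity_penalty(H1, H1))  # 1.0: identical views, maximal redundancy
print(diversity_penalty(H1, H2))  # 0.0: no redundancy between the views
```

Adding such a term to a deep matrix factorization objective pushes the per-view factors toward complementary, more discriminative representations, which is the effect the abstract describes.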
Keyword :
Deep learning; deep matrix factorization; diversity embedding; multi-view clustering
Cite:
GB/T 7714: Chen, Zexi, Lin, Pengfei, Chen, Zhaoliang, et al. Diversity embedding deep matrix factorization for multi-view clustering[J]. INFORMATION SCIENCES, 2022, 610: 114-125.
MLA: Chen, Zexi, et al. "Diversity embedding deep matrix factorization for multi-view clustering." INFORMATION SCIENCES 610 (2022): 114-125.
APA: Chen, Zexi, Lin, Pengfei, Chen, Zhaoliang, Ye, Dongyi, Wang, Shiping. Diversity embedding deep matrix factorization for multi-view clustering. INFORMATION SCIENCES, 2022, 610, 114-125.
Abstract :
The problem of coloring flower line drawings with a meticulous-painting effect based on a reference image is addressed. Existing reference-based coloring algorithms for line drawings find it difficult to learn and simulate the unique color-gradient effect of meticulous flower paintings. Moreover, the reference image in these algorithms is usually required to have a geometric layout similar to that of the line drawing, which limits their applicability. It is therefore difficult to directly apply existing algorithms to color line drawings with a meticulous effect. On the basis of the conditional generative adversarial network (CGAN) framework, a coloring algorithm for flower line drawings with meticulous effect is proposed by means of semantic matching between the reference image and the line drawing. In terms of network design, the proposed algorithm uses U-Net as the basis of the generator and adds two sub-modules. The first is the semantic positioning sub-module: it pre-trains a semantic segmentation network to generate a semantic label map of the flower line drawing; the label map is encoded as adaptive instance normalization affine parameters and introduced into the coloring model to improve the recognition of different semantic regions and the accuracy of color positioning. The second is the color coding sub-module: it extracts the color features of the reference image and splices them into the first three decoding layers of the generator, thereby injecting the color information into the coloring model. Combined with the semantic positioning sub-module, this enhances the learning and simulation of gradient color patterns. In the training stage, the model is not trained on "original meticulous flower work - flower line drawing" data pairs.
Instead, a perturbed version of the original work is generated via operations such as disturbing the original geometric structure, and "perturbed version - flower line drawing" data pairs are used to train the model, which reduces the model's dependence on the spatial layout of the original work and improves the applicability of the proposed algorithm. The experimental results show that the proposed algorithm responds correctly to the color semantics of the reference image selected by the user. They also show that the semantic positioning and color coding modules improve the simulation of gradient colors and enable colorization of a flower line drawing under the guidance of different reference images, yielding diversified coloring results.
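The semantic positioning sub-module encodes the label map as adaptive instance normalization (AdaIN) affine parameters. Here is a minimal numpy sketch of the AdaIN operation itself; the gamma/beta values are toy stand-ins for the encoded label map, not the paper's learned parameters:

```python
import numpy as np

def adain(content, gamma, beta, eps=1e-5):
    """Adaptive instance normalization: whiten the content feature map,
    then re-scale/shift it with externally supplied affine parameters."""
    mu, sigma = content.mean(), content.std()
    return gamma * (content - mu) / (sigma + eps) + beta

feat = np.array([[1.0, 2.0], [3.0, 4.0]])   # toy 2x2 feature map
out = adain(feat, gamma=2.0, beta=0.5)
print(round(out.mean(), 6), round(out.std(), 2))  # 0.5 2.0
```

Because gamma and beta come from the label map rather than from the content, the affine step lets the network apply region-appropriate color statistics, which is the "color positioning" effect the abstract describes.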
Keyword :
Color; color image processing; color matching; generative adversarial networks; semantics; semantic segmentation; Semantic Web
Cite:
GB/T 7714: Li, Yuan, Chen, Zhaojiong, Ye, Dongyi. A Coloring Algorithm for Flower Line Drawings with Meticulous Effect Based on Semantic Matching of Reference Images[J]. Computer Research and Development, 2022, 59(6): 1271-1285.
MLA: Li, Yuan, et al. "A Coloring Algorithm for Flower Line Drawings with Meticulous Effect Based on Semantic Matching of Reference Images." Computer Research and Development 59.6 (2022): 1271-1285.
APA: Li, Yuan, Chen, Zhaojiong, Ye, Dongyi. A Coloring Algorithm for Flower Line Drawings with Meticulous Effect Based on Semantic Matching of Reference Images. Computer Research and Development, 2022, 59(6), 1271-1285.
Abstract :
This work studies boundary extrapolation of Chinese landscape paintings based on generative adversarial networks. Existing image extrapolation methods mainly target natural scenes with relatively simple content and regular textures, such as grass and sky; applying them directly to Chinese landscape paintings, whose content is more complex, richly layered, and varied in brushstroke, yields blurred extrapolated content that is semantically inconsistent with the original image boundary. To address these problems, a new bidirectional decoding feature fusion generative adversarial network (BDFF-GAN) is proposed. For the generator, a multi-scale decoder is added to the existing U-Net to build a bidirectional-decoding feature-fusion generator, UY-Net. The multi-scale decoder extracts features from different levels of the encoder and combines them in a cross-complementary way, strengthening the fusion among features of different scales; meanwhile, the bidirectional decoding results at each layer are further fused with each other through conditional skip connections. These two design features of UY-Net facilitate the transfer and learning of semantic features and brushstroke patterns of landscape paintings at different granularities. For the discriminator, an architecture combining a global discriminator and a local discriminator is adopted: the global discriminator takes the whole painting as input to enforce global consistency of the extrapolated result, while the local discriminator takes small regions around the boundary between the original and extrapolated parts as input to improve the coherence of the extrapolation with the original painting and the quality of generated details. Experimental results show that, compared with other methods, the proposed algorithm learns the semantic features and texture information of landscape paintings well, and the extrapolated results perform better in the coherence of semantic content and the naturalness of brushstroke texture. In addition, a new user-interaction mode is designed that controls the contour of the extrapolated part via boundary guide lines, enabling layout-adjustable extrapolation and extending the generative diversity and interactivity of BDFF-GAN.
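As a toy illustration of what the local discriminator described above consumes, the following sketch crops small windows along the seam between the original and extrapolated regions; the function name, patch size, and seam position are hypothetical, not from the paper:

```python
import numpy as np

def boundary_patches(image, seam_col, patch=4, stride=4):
    """Crop small windows centered on the seam between the original painting
    (left of seam_col) and the extrapolated part (right of it); these crops
    are what a local discriminator would score."""
    half = patch // 2
    rows = range(half, image.shape[0] - half, stride)
    return [image[r - half:r + half, seam_col - half:seam_col + half]
            for r in rows]

canvas = np.zeros((16, 24))        # toy painting, extrapolated right of col 16
crops = boundary_patches(canvas, seam_col=16)
print(len(crops), crops[0].shape)  # 3 crops, each 4x4
```

Scoring only these seam-straddling crops concentrates the adversarial signal exactly where blur and semantic breaks tend to appear, complementing the whole-image global discriminator.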
Keyword :
U-Net; Chinese landscape painting extrapolation; bidirectional decoding feature fusion; local discriminator; generative adversarial network
Cite:
GB/T 7714: 符涛, 陈昭炯, 叶东毅. 基于生成对抗网的中国山水画双向解码特征融合外推算法[J]. 计算机研究与发展, 2022, 59(12): 2816-2830.
MLA: 符涛, et al. "基于生成对抗网的中国山水画双向解码特征融合外推算法." 计算机研究与发展 59.12 (2022): 2816-2830.
APA: 符涛, 陈昭炯, 叶东毅. 基于生成对抗网的中国山水画双向解码特征融合外推算法. 计算机研究与发展, 2022, 59(12), 2816-2830.
Abstract :
A dynamic texture is a sequence of images with regular, periodic changes in appearance from one frame to the next; typical examples include burning flames, waving flags and flowing water. Given a reference dynamic texture, dynamic texture generation aims to generate new dynamic textures that are similar in appearance to the reference but more diversified in dynamics (i.e., periodic pattern changes). Existing dynamic texture generation algorithms may partially lose the appearance information of the reference, causing structural distortion and blurred edges in the generated textures. To address this issue, we propose a structure-preserving dynamic texture generation algorithm based on a new two-stream convolutional network model with a feature reuse strategy (TSCN-FR). TSCN-FR uses spatial co-occurrence feature matrices to capture the spatial structure correlations of dynamic textures, alleviating structural distortion and disordered color-patch distribution during generation. In addition, an edge-preserving regular term, based on the edge features of the reference extracted by the LoG operator, is introduced in the training of TSCN-FR to better maintain the reference's edge structure. An application of the proposed method to dynamic style transfer is also presented. Experimental results show that our method improves on existing dynamic texture generation methods by a clear margin in its ability to preserve the appearance information of the reference.
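The edge-preserving regular term builds on LoG edge maps. Below is a self-contained numpy sketch of a discrete LoG filter and an L2 edge loss between two frames; the kernel size, sigma, and toy images are our assumptions, not the paper's settings:

```python
import numpy as np

def log_kernel(size=5, sigma=1.0):
    """Discrete Laplacian-of-Gaussian filter."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx**2 + yy**2
    k = (r2 - 2 * sigma**2) / sigma**4 * np.exp(-r2 / (2 * sigma**2))
    return k - k.mean()                    # zero-sum: no response on flat areas

def conv2d_valid(img, k):
    """Plain 'valid' 2-D correlation, enough for a toy edge map."""
    n = k.shape[0]
    out = np.zeros((img.shape[0] - n + 1, img.shape[1] - n + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + n, j:j + n] * k)
    return out

def edge_loss(gen, ref, k):
    """L2 distance between LoG edge maps of generated and reference frames."""
    return float(np.mean((conv2d_valid(gen, k) - conv2d_valid(ref, k)) ** 2))

k = log_kernel()
flat = np.ones((9, 9))
step = np.ones((9, 9)); step[:, 5:] = 0.0   # frame with a vertical edge
print(edge_loss(flat, flat, k))             # 0.0: identical edge structure
print(edge_loss(flat, step, k) > 0)         # True: edge mismatch penalized
```

Adding such a term to the generation objective penalizes any drift of the generated frames' edge structure away from the reference, which is the blur-reduction effect the abstract claims.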
Keyword :
Convolutional neural network; dynamic texture; dynamic texture generation; edge preserving; feature reuse
Cite:
GB/T 7714: Wu, Ling-chen, Ye, Dong-yi, Chen, Zhao-jiong. Structure-preserving dynamic texture generation algorithm[J]. NEURAL COMPUTING & APPLICATIONS, 2021, 33(14): 8299-8318.
MLA: Wu, Ling-chen, et al. "Structure-preserving dynamic texture generation algorithm." NEURAL COMPUTING & APPLICATIONS 33.14 (2021): 8299-8318.
APA: Wu, Ling-chen, Ye, Dong-yi, Chen, Zhao-jiong. Structure-preserving dynamic texture generation algorithm. NEURAL COMPUTING & APPLICATIONS, 2021, 33(14), 8299-8318.
Abstract :
Facial attribute transfer aims to transfer target facial attributes, such as a beard, bangs, or an open mouth, to a face that lacks them in a source facial image while keeping the face's non-target attributes intact. Existing methods for facial attribute transfer are essentially oriented to homogeneous images, focusing on transferring target attributes to (or between) photorealistic facial images. In this paper, facial attribute transfer between heterogeneous images is addressed, which is a new and more challenging task. More specifically, we propose a bi-directional facial attribute transfer method based on GAN (generative adversarial network) and latent representations for instance-based facial attribute transfer, which aims to transfer a target facial attribute with its basic shape from a reference photorealistic facial image to a source realistic portrait illustration and vice versa (i.e., erasing the target attribute in the facial image). The key points of our work are how to achieve visual-style consistency of the transferred attribute in the heterogeneous result images and how to overcome the information-dimensionality imbalance between photorealistic facial images and realistic portrait illustrations. We handle the content and visual style of an image separately in latent representation learning through a composite encoder built from convolutional and fully connected neural networks, unlike previous latent-representation-based facial attribute transfer methods that mix content and visual style in a single latent representation. This approach turns out to preserve visual-style consistency well. Besides, we introduce different multipliers for the weights of the loss terms in our objective functions to balance the information imbalance between heterogeneous images. Experiments show that our method achieves facial attribute transfer between heterogeneous images with good results.
For quantitative analysis, FID scores of our method on a couple of datasets are also given to show its effectiveness.
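At a very high level, instance-based attribute transfer in a latent space can be pictured as swapping the attribute-specific coordinates between two codes. The sketch below is a deliberately simplified illustration; the flat latent split and the `attr_slice` positions are hypothetical, not the paper's composite-encoder design:

```python
import numpy as np

def transfer_attribute(z_source, z_reference, attr_slice=slice(4, 6)):
    """Copy the latent coordinates encoding the target attribute from the
    reference image's code into the source image's code, leaving the rest
    (identity, visual style) untouched."""
    z_out = z_source.copy()
    z_out[attr_slice] = z_reference[attr_slice]
    return z_out

z_illustration = np.zeros(8)   # latent code of the portrait illustration
z_photo = np.ones(8)           # latent code of the reference photo
z_mixed = transfer_attribute(z_illustration, z_photo)
print(z_mixed.tolist())        # only dims 4-5 (the attribute) change
```

The paper's contribution is precisely that content and visual style live in separate parts of the representation, so such a swap moves the attribute's shape without dragging the photo's visual style into the illustration.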
Keyword :
Facial attribute transfer; GAN; heterogeneous images; latent representation
Cite:
GB/T 7714: Shi, Rong-xiao, Ye, Dong-yi, Chen, Zhao-jiong. A bi-directional facial attribute transfer framework: transfer your single facial attribute to a portrait illustration[J]. NEURAL COMPUTING & APPLICATIONS, 2021, 34(1): 253-270.
MLA: Shi, Rong-xiao, et al. "A bi-directional facial attribute transfer framework: transfer your single facial attribute to a portrait illustration." NEURAL COMPUTING & APPLICATIONS 34.1 (2021): 253-270.
APA: Shi, Rong-xiao, Ye, Dong-yi, Chen, Zhao-jiong. A bi-directional facial attribute transfer framework: transfer your single facial attribute to a portrait illustration. NEURAL COMPUTING & APPLICATIONS, 2021, 34(1), 253-270.
Abstract :
This paper studies the translation of layout label maps into simulated Chinese landscape paintings. Existing methods based on conditional generative adversarial networks (CGAN) suffer from color and semantic distortion and from large numbers of network parameters. To address these problems, a locally color-controllable method for generating simulated Chinese landscape paintings is proposed. First, a multi-semantic label map oriented to landscape paintings is designed as the interaction mode: object categories in landscape paintings are summarized at three semantic levels of content, technique, and color, and a corresponding hierarchical segmentation algorithm that generates multi-semantic label maps from original hand-painted landscape paintings is designed to construct "hand-painted painting - multi-semantic label map" data pairs for network training. Second, a lightweight multi-scale color-category-aware conditional GAN, MS3C-CGAN, is proposed: spatially-adaptive normalization residual blocks and bilinear upsampling structures are introduced to simplify and rebuild the original UC-Net generator, reducing the generator's parameter count by 24.45%. Comparative experiments show that the Chinese landscape paintings generated by the proposed method are more artistically realistic in color and more accurate in semantic content; moreover, the vegetation colors in the generated paintings can be controlled by editing the layout label map, with applications in art education and design simulation.
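The spatially-adaptive normalization residual blocks mentioned above modulate features per pixel according to the label map. Here is a minimal numpy sketch of that per-class modulation; the class tables and toy maps are our assumptions, not MS3C-CGAN's actual parameters:

```python
import numpy as np

def spade_modulate(feat, label_map, gamma_per_class, beta_per_class, eps=1e-5):
    """Spatially-adaptive normalization: normalize the feature map, then apply
    a per-pixel affine transform looked up from the semantic label map."""
    norm = (feat - feat.mean()) / (feat.std() + eps)
    gamma = gamma_per_class[label_map]      # (H, W) scale, varies per pixel
    beta = beta_per_class[label_map]        # (H, W) shift, varies per pixel
    return gamma * norm + beta

feat = np.array([[1.0, 2.0], [3.0, 4.0]])
labels = np.array([[0, 0], [1, 1]])         # e.g. 0 = sky, 1 = vegetation
out = spade_modulate(feat, labels,
                     gamma_per_class=np.array([1.0, 3.0]),
                     beta_per_class=np.array([0.0, 5.0]))
print(out.shape)  # (2, 2); vegetation pixels get the stronger modulation
```

Because the affine parameters follow the label map pixel by pixel, editing the label map (say, recoloring a vegetation region) directly changes the generated colors there, which is the "locally color-controllable" behavior the abstract describes.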
Cite:
GB/T 7714: 林锦, 陈昭炯, 叶东毅. 局部色彩可控的中国山水画仿真生成方法[J]. 小型微型计算机系统, 2021, 42(9): 1985-1991.
MLA: 林锦, et al. "局部色彩可控的中国山水画仿真生成方法." 小型微型计算机系统 42.9 (2021): 1985-1991.
APA: 林锦, 陈昭炯, 叶东毅. 局部色彩可控的中国山水画仿真生成方法. 小型微型计算机系统, 2021, 42(9), 1985-1991.
Abstract :
To address the poor real-time performance and low accuracy of lane detection algorithms on vehicle-mounted embedded devices with limited computing resources, a lightweight lane detection algorithm with semantic segmentation (SegLaneNet) is proposed. First, a new atrous spatial pyramid pooling module (ASPP-tiny) is built by simplifying the parallel dilated-convolution branches and adding skip connections. Then, the model defines multi-scale inputs, fusion of shallow skip-connection features with deep features, and fusion of parallel dilated-convolution features with different dilation rates. Furthermore, the upsampling and downsampling convolutions in the auto-encoder are pruned, yielding a new lightweight fully convolutional semantic segmentation algorithm, SegLaneNet, applied to lane detection. Finally, compared with the baseline on the TuSimple lane detection challenge dataset, SegLaneNet improves accuracy by about 2%, reduces false positives (FP) by more than 3%, and reduces false negatives (FN) by about 2%. It runs at 165 frames per second (FPS) on a GPU server and at 16 FPS on an embedded device. The test results show that the proposed lightweight algorithm meets the requirements for real-time, accurate lane detection on vehicle-mounted embedded devices.
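ASPP-tiny is built from parallel dilated convolutions. The following 1-D numpy sketch shows a dilated convolution and a summation-fused multi-rate module; it is a simplification (real ASPP operates on 2-D feature maps with learned kernels), and all names and rates here are our assumptions:

```python
import numpy as np

def dilated_conv1d(x, w, rate):
    """1-D dilated convolution: filter taps are spaced `rate` samples apart,
    enlarging the receptive field without adding parameters."""
    reach = (len(w) - 1) * rate
    return np.array([sum(w[k] * x[i + k * rate] for k in range(len(w)))
                     for i in range(len(x) - reach)])

def aspp_tiny(x, w, rates=(1, 2, 4)):
    """Parallel dilated branches over the same input, fused by summation
    (cropped to the shortest branch), ASPP-style."""
    outs = [dilated_conv1d(x, w, r) for r in rates]
    n = min(len(o) for o in outs)
    return sum(o[:n] for o in outs)

x = np.arange(10, dtype=float)     # a linear ramp signal
w = np.array([1.0, -1.0])          # finite-difference taps
y = aspp_tiny(x, w)
print(y.tolist())   # each branch outputs -rate on a ramp: -(1+2+4) = -7.0 each
```

The multiple rates let one cheap module see context at several scales at once, which is why the abstract can claim both lightweight operation and accuracy on thin, elongated lane markings.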
Cite:
GB/T 7714: 陈正斌, 叶东毅. 带语义分割的轻量化车道线检测算法[J]. 小型微型计算机系统, 2021, 42(9): 1877-1883.
MLA: 陈正斌, et al. "带语义分割的轻量化车道线检测算法." 小型微型计算机系统 42.9 (2021): 1877-1883.
APA: 陈正斌, 叶东毅. 带语义分割的轻量化车道线检测算法. 小型微型计算机系统, 2021, 42(9), 1877-1883.