Research Output Search

Query:

Scholar Name: 陈昭炯


13 pages of results in total.
Multi-round Conversational Reinforcement Learning Recommendation Algorithm Considering Multi-granularity Feedback [考虑多粒度反馈的多轮对话强化学习推荐算法] CSCD PKU
Journal article | 2023, 43(1), 15-21 | 计算机应用

Abstract :

Multi-round conversational recommender systems (CRS) acquire users' real-time information interactively and can therefore achieve better recommendation results than traditional methods such as collaborative filtering. However, existing CRSs suffer from inaccurate capture of user preferences, too many required dialogue rounds, and poorly timed recommendations. To address these problems, a conversational recommendation algorithm based on deep reinforcement learning that considers multi-granularity user feedback is proposed. Unlike existing CRSs, in each dialogue round the proposed algorithm considers user feedback on both the items themselves and the finer-grained item attributes, updates the user, item, and attribute features online according to the collected multi-granularity feedback, and uses a deep Q-network (DQN) to analyze the environment state after each round, helping the system make more appropriate and reasonable decision actions. This lets the system analyze why users buy items within fewer dialogue rounds and mine users' real-time preferences more thoroughly. Compared with the conversational path reasoning (SCPR) algorithm, on the real-world Last.fm dataset the proposed algorithm improves the 15-round recommendation success rate by 46.5% and shortens the number of recommendation rounds by 0.314; on the real-world Yelp dataset it maintains the same level of recommendation success rate while shortening the number of rounds by 0.51.
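As a rough illustration of the decision step described in this abstract, the sketch below shows a DQN head choosing between asking about an attribute and recommending an item from a dialogue-state vector. It is a minimal PyTorch sketch, not the paper's code; STATE_DIM, QNet, and the two-action space are illustrative assumptions.

import torch
import torch.nn as nn

STATE_DIM = 64                                 # assumed size of the online-updated dialogue state
ACTIONS = ["ask_attribute", "recommend_item"]  # the two coarse decision actions

class QNet(nn.Module):
    def __init__(self, state_dim: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 128), nn.ReLU(),
            nn.Linear(128, n_actions),
        )
    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)                 # one Q-value per action

q = QNet(STATE_DIM, len(ACTIONS))
state = torch.randn(1, STATE_DIM)              # stand-in for the real user/item/attribute state
action = ACTIONS[q(state).argmax(dim=1).item()]
print(action)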

Keyword :

preference mining; feedback information; multi-granularity; multi-round conversational recommender system; deep Q-network (DQN)

Cite:


GB/T 7714 姚华勇 , 叶东毅 , 陈昭炯 . 考虑多粒度反馈的多轮对话强化学习推荐算法 [J]. | 计算机应用 , 2023 , 43 (1) : 15-21 .
MLA 姚华勇 et al. "考虑多粒度反馈的多轮对话强化学习推荐算法" . | 计算机应用 43 . 1 (2023) : 15-21 .
APA 姚华勇 , 叶东毅 , 陈昭炯 . 考虑多粒度反馈的多轮对话强化学习推荐算法 . | 计算机应用 , 2023 , 43 (1) , 15-21 .

Version :

考虑多粒度反馈的多轮对话强化学习推荐算法 CSCD PKU
Journal article | 2023, 43(01), 15-21 | 计算机应用
Matrix Completion via Local Density Loss Optimization CPCI-S
Conference paper | 2022, 12083 | THIRTEENTH INTERNATIONAL CONFERENCE ON GRAPHICS AND IMAGE PROCESSING (ICGIP 2021)

Abstract :

Matrix completion, which attempts to recover missing values in an incomplete matrix, is one of the main methods for dealing with low-rank problems. It has been employed in a large number of real-world applications such as recommender systems and image recovery. When the data is highly sparse, many practical problems in recommender systems and image recovery are hard to solve. We therefore use fully connected neural networks for matrix completion to avoid the problems caused by sparse matrices. In this paper, we utilize a local density loss function to measure the quality of the matrix completion results, where trainable parameters are updated by calculating derivatives according to the influence function. The local density loss function effectively measures the deviation between predicted and real values, and the convergence of the model is guaranteed. To validate the effectiveness of the proposed method, we conduct substantial experiments on image recovery and recommender systems. In addition, we employ three metrics, root mean square error, peak signal-to-noise ratio, and structural similarity, to measure the recovery accuracy of the missing entries. Experimental results demonstrate that this framework is superior to other state-of-the-art methods in running time and learning performance.
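The following is a hedged sketch of the neural matrix completion setup the abstract describes: a fully connected network predicts an entry from learned row and column embeddings and is trained on observed entries only. The paper's local density loss is not reproduced here, so a plain MSE over observed entries stands in for it; all sizes are toy values.

import torch
import torch.nn as nn

n_rows, n_cols, k = 100, 80, 16
row_emb = nn.Embedding(n_rows, k)
col_emb = nn.Embedding(n_cols, k)
mlp = nn.Sequential(nn.Linear(2 * k, 64), nn.ReLU(), nn.Linear(64, 1))

def predict(r, c):
    # map (row, col) index pair to a predicted matrix entry
    return mlp(torch.cat([row_emb(r), col_emb(c)], dim=-1)).squeeze(-1)

# toy observed entries (index pairs and values)
r = torch.randint(0, n_rows, (256,)); c = torch.randint(0, n_cols, (256,))
v = torch.randn(256)
opt = torch.optim.Adam(list(mlp.parameters()) + list(row_emb.parameters())
                       + list(col_emb.parameters()), lr=1e-3)
for _ in range(10):
    opt.zero_grad()
    loss = ((predict(r, c) - v) ** 2).mean()   # stand-in for the local density loss
    loss.backward(); opt.step()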

Keyword :

back propagation; density loss; image recovery; Matrix completion; neural networks; recommender systems

Cite:


GB/T 7714 Fang, Hui , Wang, Yunbin , Chen, Zhaojiong . Matrix Completion via Local Density Loss Optimization [J]. | THIRTEENTH INTERNATIONAL CONFERENCE ON GRAPHICS AND IMAGE PROCESSING (ICGIP 2021) , 2022 , 12083 .
MLA Fang, Hui et al. "Matrix Completion via Local Density Loss Optimization" . | THIRTEENTH INTERNATIONAL CONFERENCE ON GRAPHICS AND IMAGE PROCESSING (ICGIP 2021) 12083 (2022) .
APA Fang, Hui , Wang, Yunbin , Chen, Zhaojiong . Matrix Completion via Local Density Loss Optimization . | THIRTEENTH INTERNATIONAL CONFERENCE ON GRAPHICS AND IMAGE PROCESSING (ICGIP 2021) , 2022 , 12083 .

Version :

Matrix Completion via Local Density Loss Optimization EI
Conference paper | 2022, 12083
GAN-Based Bidirectional Decoding Feature Fusion Extrapolation Algorithm of Chinese Landscape Painting [基于生成对抗网的中国山水画双向解码特征融合外推算法] Scopus CSCD PKU
Journal article | 2022, 59(12), 2816-2830 | Computer Research and Development

Abstract :

An extrapolation method for Chinese landscape paintings based on generative adversarial networks is proposed in this paper. Existing image extrapolation methods are mainly designed for natural images composed of large regions with homogeneous content and regular textures, such as grass and sky. When applied to Chinese landscape paintings, which have complex details, rich gradations, and varied brushstrokes, they often suffer from blur and boundary semantic inconsistency in the extrapolated regions. To address these problems, a new bidirectional decoding feature fusion network based on generative adversarial networks (BDFF-GAN) is proposed. The generator, named UY-Net, combines the architecture of U-Net with a multi-scale decoder, achieving bidirectional decoding feature fusion. Features from different layers of the encoder are assigned to the corresponding layers of the multi-scale decoder, where the first-stage fusion is achieved by concatenation, strengthening the connections between features at different scales. In addition, decoded features from the U-Net part and the multi-scale decoder part at the same scale are fused through skip connections to further improve the performance of the generator. Benefiting from this architecture, UY-Net performs well at learning and transmitting semantic features and brushstrokes. Moreover, a multi-discriminator strategy is adopted: a global discriminator takes the whole result image as input to control global consistency, while a local discriminator takes a patch from the junction of the source image and the extrapolated part as input to improve coherence and details. Experimental results show that BDFF-GAN learns the semantic features and textures of landscape paintings well and outperforms existing methods in terms of semantic coherence and the naturalness of brushstroke texture. In addition, we provide an interface that allows users to control the outline of the extrapolated part with boundary guide lines, which makes the layout of the extrapolated part controllable and expands the generation diversity and interactivity of BDFF-GAN. © 2022, Science Press. All right reserved.
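To make the fusion scheme concrete, here is a schematic PyTorch sketch of the two fusion stages the abstract names: encoder features of different depths are concatenated at a common scale (the multi-scale decoder branch), and same-scale decoded features from the two branches are then fused again. Shapes and channel counts are illustrative assumptions, not UY-Net's actual configuration.

import torch
import torch.nn as nn
import torch.nn.functional as F

enc1 = torch.randn(1, 64, 64, 64)    # shallow encoder feature (stand-in)
enc2 = torch.randn(1, 128, 32, 32)   # deeper encoder feature (stand-in)

# U-Net branch: upsample the deep feature and concatenate the skip connection
up = F.interpolate(enc2, scale_factor=2, mode="bilinear", align_corners=False)
unet_feat = torch.cat([up, enc1], dim=1)                 # (1, 192, 64, 64)

# multi-scale decoder branch: bring both encoder levels to one scale and concatenate
ms_feat = torch.cat(
    [enc1, F.interpolate(enc2, size=enc1.shape[-2:], mode="bilinear",
                         align_corners=False)], dim=1)   # (1, 192, 64, 64)

# second-stage fusion of the two same-scale decoded features
fuse = nn.Conv2d(unet_feat.shape[1] + ms_feat.shape[1], 64, kernel_size=3, padding=1)
out = fuse(torch.cat([unet_feat, ms_feat], dim=1))
print(out.shape)  # torch.Size([1, 64, 64, 64])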

Keyword :

Bidirectional decoding feature fusion; Chinese landscape painting extrapolation; Generative adversarial network (GAN); Local discriminator; U-Net

Cite:


GB/T 7714 Fu, T. , Chen, Z. , Ye, D. . GAN-Based Bidirectional Decoding Feature Fusion Extrapolation Algorithm of Chinese Landscape Painting [基于生成对抗网的中国山水画双向解码特征融合外推算法] [J]. | Computer Research and Development , 2022 , 59 (12) : 2816-2830 .
MLA Fu, T. et al. "GAN-Based Bidirectional Decoding Feature Fusion Extrapolation Algorithm of Chinese Landscape Painting [基于生成对抗网的中国山水画双向解码特征融合外推算法]" . | Computer Research and Development 59 . 12 (2022) : 2816-2830 .
APA Fu, T. , Chen, Z. , Ye, D. . GAN-Based Bidirectional Decoding Feature Fusion Extrapolation Algorithm of Chinese Landscape Painting [基于生成对抗网的中国山水画双向解码特征融合外推算法] . | Computer Research and Development , 2022 , 59 (12) , 2816-2830 .

Version :

GAN-Based Bidirectional Decoding Feature Fusion Extrapolation Algorithm of Chinese Landscape Painting EI CSCD PKU
Journal article | 2022, 59(12), 2816-2830 | Computer Research and Development
A Coloring Algorithm for Flower Line Drawings with Meticulous Effect Based on Semantic Matching of Reference Images EI CSCD PKU
Journal article | 2022, 59(6), 1271-1285 | Computer Research and Development

Abstract :

The problem of coloring flower line drawings with a meticulous (gongbi) effect based on a reference image is addressed. Existing reference-based coloring algorithms for line drawings find it difficult to learn and simulate the characteristic color gradations of meticulous flower paintings. Moreover, these algorithms usually require the reference image to have a geometric layout similar to that of the line drawing, which limits their applicability. It is therefore difficult to accomplish meticulous-effect coloring of line drawings with existing algorithms directly. On the basis of the conditional generative adversarial network (CGAN) framework, a coloring algorithm for flower line drawings with meticulous effect is proposed, built on semantic matching between the reference image and the line drawing. In terms of network design, the proposed algorithm uses U-Net as the basis of the generator and adds two sub-modules. One is the semantic positioning sub-module, which pre-trains a semantic segmentation network to generate a semantic label map of the flower line drawing. The label map is encoded as adaptive instance normalization affine parameters and introduced into the coloring model to improve the recognition of different semantic regions and the accuracy of color positioning. The other is the color coding sub-module, which extracts the color features of the reference image and splices them into the first three decoding layers of the generator, injecting the color information into the coloring model. Combining this module with the semantic positioning module enhances the learning and simulation of gradient color patterns. In the training stage, the algorithm does not train on 'original meticulous flower work-flower line drawing' data pairs. Instead, a perturbed version of the original work is generated by operations that disturb its geometric structure, and 'perturbed version-flower line drawing' data pairs are used for training, which reduces the model's dependence on the spatial layout of the original work and improves the applicability of the proposed algorithm. The experimental results show that the proposed algorithm responds correctly to the color semantics of the reference image selected by the user. They also show that the semantic positioning and color coding modules improve the simulation of gradient colors and enable colorization of a flower line drawing under the guidance of different reference images, yielding diversified coloring results. © 2022, Science Press. All right reserved.
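A hedged sketch of the semantic positioning idea follows: the semantic label map is encoded into adaptive instance normalization (AdaIN) affine parameters that modulate generator features. The encoder, label count, and pooling scheme are assumptions for illustration, not the paper's design.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SemanticAdaIN(nn.Module):
    def __init__(self, feat_ch: int, n_labels: int):
        super().__init__()
        # predict per-channel scale and shift from the pooled label map
        self.to_affine = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(n_labels, 2 * feat_ch),
        )
    def forward(self, feat, label_onehot):
        normed = F.instance_norm(feat)
        gamma, beta = self.to_affine(label_onehot).chunk(2, dim=1)
        return normed * (1 + gamma[..., None, None]) + beta[..., None, None]

mod = SemanticAdaIN(feat_ch=64, n_labels=8)
feat = torch.randn(1, 64, 32, 32)
labels = torch.randn(1, 8, 32, 32)      # stand-in for a one-hot semantic label map
print(mod(feat, labels).shape)          # torch.Size([1, 64, 32, 32])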

Keyword :

Color; Color image processing; Color matching; Generative adversarial networks; Semantics; Semantic Segmentation; Semantic Web

Cite:


GB/T 7714 Li, Yuan , Chen, Zhaojiong , Ye, Dongyi . A Coloring Algorithm for Flower Line Drawings with Meticulous Effect Based on Semantic Matching of Reference Images [J]. | Computer Research and Development , 2022 , 59 (6) : 1271-1285 .
MLA Li, Yuan et al. "A Coloring Algorithm for Flower Line Drawings with Meticulous Effect Based on Semantic Matching of Reference Images" . | Computer Research and Development 59 . 6 (2022) : 1271-1285 .
APA Li, Yuan , Chen, Zhaojiong , Ye, Dongyi . A Coloring Algorithm for Flower Line Drawings with Meticulous Effect Based on Semantic Matching of Reference Images . | Computer Research and Development , 2022 , 59 (6) , 1271-1285 .

Version :

A Coloring Algorithm for Flower Line Drawings with Meticulous Effect Based on Semantic Matching of Reference Images [基于参考图语义匹配的花卉线稿工笔效果上色算法] Scopus CSCD PKU
Journal article | 2022, 59(6), 1271-1285 | Computer Research and Development
Automatic Itinerary Planning Using Triple-Agent Deep Reinforcement Learning SCIE
Journal article | 2022, 23(10), 18864-18875 | IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS

Abstract :

Automatic itinerary planning that provides an epic journey for each traveler is a fundamental yet inefficient task. Most existing planning methods apply heuristic guidelines for a certain objective and thereby favor popular points of interest (POIs) with high probability, ignoring the intrinsic correlation between POI exploration, the traveler's preferences, and distinctive attractions. To tackle the itinerary planning problem, this paper explores the connections of these three objectives in a probabilistic manner based on a Bayesian model and proposes a triple-agent deep reinforcement learning approach, which generates a 4-way direction, a 4-way distance, and a 3-way selection strategy for iteratively determining the next POI to visit in the itinerary. Experiments on five real-world cities demonstrate that our triple-agent deep reinforcement learning approach provides better planning results than state-of-the-art multiobjective optimization methods.
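The triple-agent decision structure can be pictured as three policy heads over a shared state, as in the sketch below: a 4-way direction head, a 4-way distance head, and a 3-way selection-strategy head jointly determine the next POI. State construction and layer sizes are assumptions, and the paper's agents are trained with deep RL rather than used untrained as here.

import torch
import torch.nn as nn

class TripleHeadPolicy(nn.Module):
    def __init__(self, state_dim: int = 32):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU())
        self.direction = nn.Linear(64, 4)   # 4-way direction
        self.distance = nn.Linear(64, 4)    # 4-way distance band
        self.selection = nn.Linear(64, 3)   # 3-way selection strategy
    def forward(self, state):
        h = self.trunk(state)
        return (self.direction(h).argmax(-1),
                self.distance(h).argmax(-1),
                self.selection(h).argmax(-1))

policy = TripleHeadPolicy()
print(policy(torch.randn(1, 32)))  # e.g. (tensor([2]), tensor([0]), tensor([1]))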

Keyword :

Automatic itinerary planning; Computer science; deep reinforcement learning; multiobjective optimization; Planning; Probabilistic logic; Reinforcement learning; Search problems; Space exploration; Urban areas

Cite:


GB/T 7714 Chen, Bo-Hao , Han, Jin , Chen, Shengxin et al. Automatic Itinerary Planning Using Triple-Agent Deep Reinforcement Learning [J]. | IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS , 2022 , 23 (10) : 18864-18875 .
MLA Chen, Bo-Hao et al. "Automatic Itinerary Planning Using Triple-Agent Deep Reinforcement Learning" . | IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS 23 . 10 (2022) : 18864-18875 .
APA Chen, Bo-Hao , Han, Jin , Chen, Shengxin , Yin, Jia-Li , Chen, Zhaojiong . Automatic Itinerary Planning Using Triple-Agent Deep Reinforcement Learning . | IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS , 2022 , 23 (10) , 18864-18875 .

Version :

Automatic Itinerary Planning Using Triple-Agent Deep Reinforcement Learning Scopus
Journal article | 2022, 23(10), 18864-18875 | IEEE Transactions on Intelligent Transportation Systems
Automatic Itinerary Planning Using Triple-Agent Deep Reinforcement Learning EI
Journal article | 2022, 23(10), 18864-18875 | IEEE Transactions on Intelligent Transportation Systems
GAN-Based Bidirectional Decoding Feature Fusion Extrapolation Algorithm of Chinese Landscape Painting [基于生成对抗网的中国山水画双向解码特征融合外推算法] CSCD PKU
Journal article | 2022, 59(12), 2816-2830 | 计算机研究与发展

Abstract :

This paper studies boundary extrapolation of Chinese landscape paintings based on generative adversarial networks. Existing image extrapolation methods mainly target natural scenes with relatively uniform content and regular textures, such as grass and sky; applying them directly to Chinese landscape paintings, whose content is complex, richly layered, and varied in brushwork, yields blurred extrapolated content that is semantically inconsistent with the original image boundary. To address these problems, a new bidirectional decoding feature fusion generative adversarial network (BDFF-GAN) is proposed. For the generator, a multi-scale decoder is added to the existing U-Net to build a bidirectional decoding feature fusion generator, UY-Net. The multi-scale decoder extracts features from different encoder levels and combines them in a cross-complementary way, strengthening the connection and blending of features at different scales; meanwhile, the bidirectional decoding results at each layer are further fused with each other through conditional skip connections. These two design features of UY-Net help the network transfer and learn the semantic features and brushstroke forms of landscape paintings at different granularities. For the discriminator, an architecture combining a global discriminator and a local discriminator is adopted: the global discriminator takes the whole painting as input to control the global consistency of the extrapolated result, while the local discriminator takes a small region around the junction between the original and the extrapolated painting as input to improve the coherence and detail quality of the extrapolated part. Experimental results show that, compared with other methods, the proposed algorithm learns the semantic features and texture information of landscape paintings well, and the extrapolated results perform better in terms of semantic coherence and the naturalness of brushstroke texture. In addition, a new user interaction mode is designed that controls the contour of the extrapolated part through boundary guide lines, achieving layout-adjustable extrapolation and extending the generation diversity and interactivity of BDFF-GAN.

Keyword :

U-Net; Chinese landscape painting extrapolation; bidirectional decoding feature fusion; local discriminator; generative adversarial network (GAN)

Cite:


GB/T 7714 符涛 , 陈昭炯 , 叶东毅 . 基于生成对抗网的中国山水画双向解码特征融合外推算法 [J]. | 计算机研究与发展 , 2022 , 59 (12) : 2816-2830 .
MLA 符涛 et al. "基于生成对抗网的中国山水画双向解码特征融合外推算法" . | 计算机研究与发展 59 . 12 (2022) : 2816-2830 .
APA 符涛 , 陈昭炯 , 叶东毅 . 基于生成对抗网的中国山水画双向解码特征融合外推算法 . | 计算机研究与发展 , 2022 , 59 (12) , 2816-2830 .

Version :

基于生成对抗网的中国山水画双向解码特征融合外推算法 CSCD PKU
Journal article | 2022, 59(12), 2816-2830 | 计算机研究与发展
A Coloring Algorithm for Flower Line Drawings with Meticulous Effect Based on Semantic Matching of Reference Images [基于参考图语义匹配的花卉线稿工笔效果上色算法] CSCD PKU
Journal article | 2022, 59(06), 1271-1285 | 计算机研究与发展

Abstract :

This paper studies coloring flower line drawings with a meticulous (gongbi) effect based on a reference image. Existing reference-based line-drawing coloring algorithms have difficulty learning and simulating the characteristic color gradations of meticulous flower paintings; in addition, they usually require the reference image to share a similar geometric layout with the line drawing, which limits their applicability, so existing algorithms cannot directly achieve meticulous-effect coloring of line drawings. Based on the conditional generative adversarial network (CGAN) framework, a coloring algorithm, RBSM-CGAN, that semantically matches the reference image with the line drawing is proposed. For the network structure, the algorithm uses U-Net as the generator backbone and designs two additional sub-modules: 1) a semantic positioning sub-module, which pre-trains a se…

Keyword :

meticulous flower painting coloring; conditional generative adversarial network; adaptive instance normalization; semantic segmentation network; semantic matching

Cite:


GB/T 7714 李媛 , 陈昭炯 , 叶东毅 . 基于参考图语义匹配的花卉线稿工笔效果上色算法 [J]. | 计算机研究与发展 , 2022 , 59 (06) : 1271-1285 .
MLA 李媛 et al. "基于参考图语义匹配的花卉线稿工笔效果上色算法" . | 计算机研究与发展 59 . 06 (2022) : 1271-1285 .
APA 李媛 , 陈昭炯 , 叶东毅 . 基于参考图语义匹配的花卉线稿工笔效果上色算法 . | 计算机研究与发展 , 2022 , 59 (06) , 1271-1285 .

Version :

基于参考图语义匹配的花卉线稿工笔效果上色算法 CSCD PKU
Journal article | 2022, 59(6), 1271-1285 | 计算机研究与发展
A Locally Color-Controllable Simulation Generation Method for Chinese Landscape Paintings [局部色彩可控的中国山水画仿真生成方法] CSCD PKU
Journal article | 2021, 42(9), 1985-1991 | 小型微型计算机系统

Abstract :

This paper studies the conversion of layout label maps into simulated Chinese landscape paintings. Existing methods based on conditional generative adversarial networks (CGAN) suffer from color and semantic distortion and from large network parameter counts. To address these problems, a locally color-controllable simulation generation method for Chinese landscape paintings is proposed. First, a multi-semantic label map oriented to landscape paintings is designed as the interaction mode: object categories in landscape paintings are organized along the three semantic levels of content, technique, and color, and a corresponding hierarchical segmentation algorithm is designed to generate multi-semantic label maps from original hand-painted landscape paintings, so as to construct 'hand-painted landscape painting-multi-semantic label map' data pairs for network training. Second, a lightweight multi-scale color-category-aware conditional generative adversarial network, MS3C-CGAN, is proposed; spatially adaptive normalization residual blocks and bilinear upsampling structures are introduced to simplify and restructure the original UC-Net generator, reducing the generator's parameter count by 24.45%. Comparative experiments show that the Chinese landscape paintings generated by the proposed method are more artistically realistic in color and more accurate in semantic content; moreover, the vegetation colors in the generated painting can be controlled by editing the layout label map, making the method applicable to art education, design simulation, and other fields.
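For illustration, the sketch below combines the two generator ingredients the abstract names: a spatially adaptive normalization residual block conditioned on the multi-semantic label map, and bilinear upsampling in place of transposed convolutions. The channel sizes and exact block layout are assumptions, not the MS3C-CGAN configuration.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SpadeResBlock(nn.Module):
    def __init__(self, ch: int, label_ch: int):
        super().__init__()
        self.mod = nn.Conv2d(label_ch, 2 * ch, 3, padding=1)  # per-pixel gamma, beta
        self.conv = nn.Conv2d(ch, ch, 3, padding=1)
    def forward(self, x, label_map):
        label_map = F.interpolate(label_map, size=x.shape[-2:], mode="nearest")
        gamma, beta = self.mod(label_map).chunk(2, dim=1)
        h = F.instance_norm(x) * (1 + gamma) + beta          # spatially adaptive norm
        return x + self.conv(F.relu(h))                      # residual connection

x = torch.randn(1, 64, 16, 16)
labels = torch.randn(1, 8, 64, 64)       # stand-in multi-semantic label map
x = SpadeResBlock(64, 8)(x, labels)
x = F.interpolate(x, scale_factor=2, mode="bilinear", align_corners=False)
print(x.shape)  # torch.Size([1, 64, 32, 32])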

Cite:


GB/T 7714 林锦 , 陈昭炯 , 叶东毅 . 局部色彩可控的中国山水画仿真生成方法 [J]. | 小型微型计算机系统 , 2021 , 42 (9) : 1985-1991 .
MLA 林锦 et al. "局部色彩可控的中国山水画仿真生成方法" . | 小型微型计算机系统 42 . 9 (2021) : 1985-1991 .
APA 林锦 , 陈昭炯 , 叶东毅 . 局部色彩可控的中国山水画仿真生成方法 . | 小型微型计算机系统 , 2021 , 42 (9) , 1985-1991 .

Version :

局部色彩可控的中国山水画仿真生成方法 CSCD PKU
Journal article | 2021, 42(09), 1985-1991 | 小型微型计算机系统
A Hybrid Target Tracking Algorithm Based on a New Background-Suppressed Color Distribution Model [基于背景抑制颜色分布新模型的合成式目标跟踪算法] CSCD PKU
Journal article | 2021, 47(03), 630-640 | 自动化学报

Abstract :

Traditional histogram-based target color models cannot use fine bin partitions because of the real-time requirements of tracking, so visibly different colors falling into the same bin are hard to distinguish; they are also easily disturbed by the background. This paper proposes a new background-suppressed target color distribution model and, on this basis, designs a hybrid target tracking algorithm. The new color distribution model incorporates first- and second-order statistics and designs a weighting scheme based on human visual characteristics, which effectively distinguishes different colors within the same bin and suppresses the proportion of background colors in the model. The algorithm builds a generative target model on this color model, introduces a correlation filter with histogram of oriented gradient (HOG) features for discriminative modeling of the target shape, and fuses the two models; for the fusion parameters that are not…
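A minimal sketch of the background-suppression idea, under an assumed weighting rule: color bins that also occur frequently in the surrounding background have their weight in the target color model reduced. The bin count and the specific rule are illustrative; the paper's model additionally uses first- and second-order statistics and vision-based weights, which are omitted here.

import numpy as np

def bg_suppressed_model(target_pixels, bg_pixels, bins=16, eps=1e-6):
    """Both inputs: (N, 3) uint8 RGB arrays. Returns per-bin target weights."""
    def hist(p):
        idx = (p // (256 // bins)).astype(int)
        h = np.zeros((bins,) * 3)
        np.add.at(h, (idx[:, 0], idx[:, 1], idx[:, 2]), 1)
        return h / max(len(p), 1)
    ht, hb = hist(target_pixels), hist(bg_pixels)
    return ht * (ht / (ht + hb + eps))   # suppress bins shared with the background

target = np.random.randint(0, 256, (500, 3), dtype=np.uint8)
background = np.random.randint(0, 256, (2000, 3), dtype=np.uint8)
print(bg_suppressed_model(target, background).sum())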

Keyword :

model fusion; correlation filter; particle swarm optimization; background suppression; color model

Cite:


GB/T 7714 陈昭炯 , 叶东毅 , 林德威 . 基于背景抑制颜色分布新模型的合成式目标跟踪算法 [J]. | 自动化学报 , 2021 , 47 (03) : 630-640 .
MLA 陈昭炯 et al. "基于背景抑制颜色分布新模型的合成式目标跟踪算法" . | 自动化学报 47 . 03 (2021) : 630-640 .
APA 陈昭炯 , 叶东毅 , 林德威 . 基于背景抑制颜色分布新模型的合成式目标跟踪算法 . | 自动化学报 , 2021 , 47 (03) , 630-640 .

Version :

A bi-directional facial attribute transfer framework: transfer your single facial attribute to a portrait illustration SCIE
Journal article | 2021, 34(1), 253-270 | NEURAL COMPUTING & APPLICATIONS
WoS CC Cited Count: 1

Abstract :

Facial attribute transfer aims to transfer target facial attributes, such as a beard, bangs, or an open mouth, to a face lacking them in a source facial image while keeping the face's non-target attributes intact. Existing methods for facial attribute transfer are basically oriented to homogeneous images, focusing on transferring target attributes to (or between) photorealistic facial images. In this paper, facial attribute transfer between heterogeneous images is addressed, which is a new and more challenging task. More specifically, we propose a bi-directional facial attribute transfer method based on GAN (generative adversarial network) and latent representations for instance-based facial attribute transfer, which aims to transfer a target facial attribute with its basic shape from a reference photorealistic facial image to a source realistic portrait illustration, and vice versa (i.e., erasing the target attribute in the facial image). The key points of our work are achieving visual style consistency of the transferred attribute in the heterogeneous result images and overcoming the information dimensionality imbalance between photorealistic facial images and realistic portrait illustrations. We handle the content and visual style of an image separately in latent representation learning using a composite encoder built from a convolutional neural network and a fully connected neural network, which differs from previous latent-representation-based facial attribute transfer methods that mix content and visual style in a single latent representation. This approach turns out to preserve visual style consistency well. Besides, we introduce different multipliers for the weights of loss terms in our objective functions to balance the information imbalance between heterogeneous images. Experiments show that our method achieves facial attribute transfer between heterogeneous images with good results. For quantitative analysis, FID scores of our method on a couple of datasets are also given to show its effectiveness.
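The composite encoder idea can be sketched as follows: a convolutional branch produces a spatial content code while a fully connected branch produces a global style code, so content and visual style live in separate latents. Layer sizes and the style dimension are assumptions for illustration, not the paper's architecture.

import torch
import torch.nn as nn

class CompositeEncoder(nn.Module):
    def __init__(self, style_dim: int = 8):
        super().__init__()
        self.content = nn.Sequential(           # CNN branch -> spatial content code
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
        )
        self.style = nn.Sequential(             # FC branch -> global style code
            nn.Flatten(), nn.Linear(3 * 64 * 64, 256), nn.ReLU(),
            nn.Linear(256, style_dim),
        )
    def forward(self, img):
        return self.content(img), self.style(img)

enc = CompositeEncoder()
content, style = enc(torch.randn(1, 3, 64, 64))
print(content.shape, style.shape)  # torch.Size([1, 64, 16, 16]) torch.Size([1, 8])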

Keyword :

Facial attribute transfer; GAN; Heterogeneous images; Latent representation

Cite:


GB/T 7714 Shi, Rong-xiao , Ye, Dong-yi , Chen, Zhao-jiong . A bi-directional facial attribute transfer framework: transfer your single facial attribute to a portrait illustration [J]. | NEURAL COMPUTING & APPLICATIONS , 2021 , 34 (1) : 253-270 .
MLA Shi, Rong-xiao et al. "A bi-directional facial attribute transfer framework: transfer your single facial attribute to a portrait illustration" . | NEURAL COMPUTING & APPLICATIONS 34 . 1 (2021) : 253-270 .
APA Shi, Rong-xiao , Ye, Dong-yi , Chen, Zhao-jiong . A bi-directional facial attribute transfer framework: transfer your single facial attribute to a portrait illustration . | NEURAL COMPUTING & APPLICATIONS , 2021 , 34 (1) , 253-270 .

Version :

A bi-directional facial attribute transfer framework: transfer your single facial attribute to a portrait illustration EI
Journal article | 2022, 34(1), 253-270 | Neural Computing and Applications
