Publication Search

Query: Scholar name: 陈国栋 (Chen Guodong)

Construction-Site Image Enhancement Based on Multi-Scale Fusion and Fractional Differentiation (基于多尺度融合和分数阶微分的工地图像增强)
Journal article | 2024, 41 (04), 58-63 | 贵州大学学报(自然科学版)

Abstract :

Images collected at construction sites commonly suffer from color cast, low contrast, and blurred texture, which degrades both the human viewing experience and downstream machine-vision results. To address this, a construction-site image enhancement algorithm based on improved multi-scale fusion and adaptive fractional differentiation is proposed. Targeting the blurred-texture characteristics of site images, both the multi-scale fusion and the adaptive fractional differentiation algorithms are improved: globally and locally contrast-enhanced images replace the two input images for multi-scale fusion, further raising contrast, while in HSV color space adaptive fractional differentiation is applied only to the V channel and the response is blended with the original image by weighted fusion, enhancing texture and suppressing artifacts without altering the original colors. Experimental results show that images enhanced by the proposed algorithm have more natural tones, higher contrast, and stronger detail rendering than those produced by other image enhancement algorithms. The method can therefore enhance low-quality construction-site images quickly and efficiently, improving the accuracy and speed of subsequent machine-vision processing and helping to address poor site image quality.
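As a rough illustration of the V-channel step only, the sketch below applies a fixed-order Grünwald–Letnikov-style 3x3 mask to the V channel in HSV space and blends the response back by weighted fusion. The paper's adaptive order selection and its improved multi-scale fusion stage are not reproduced, and the order and weight values are assumptions.

```python
# Hypothetical sketch: fractional-differential texture boost on the V channel
# (fixed order instead of the paper's adaptive order; multi-scale fusion omitted).
import cv2
import numpy as np

def enhance_v_channel(bgr, order=0.5, weight=0.3):
    """order: assumed fractional order in (0, 1); weight: assumed fusion weight."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    h, s, v = cv2.split(hsv)
    vf = v.astype(np.float32)

    # 3x3 Grünwald–Letnikov-style mask (first two terms, summed over 8 directions):
    # centre coefficient 8, each neighbour -order.
    k = np.full((3, 3), -order, dtype=np.float32)
    k[1, 1] = 8.0
    gl = cv2.filter2D(vf, -1, k)

    # Remove the DC gain so flat regions keep their brightness; what remains is a detail layer.
    detail = gl - (8.0 - 8.0 * order) * vf

    # Weighted fusion of the detail layer with the original V channel.
    v_new = np.clip(vf + weight * detail, 0, 255).astype(np.uint8)
    return cv2.cvtColor(cv2.merge([h, s, v_new]), cv2.COLOR_HSV2BGR)
```

Because only V is modified, hue and saturation are untouched, which is the property the abstract emphasises.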

Keyword :

image enhancement; multi-scale fusion; construction-site image; adaptive fractional differentiation

Cite:


GB/T 7714 林咸磊 , 陈国栋 , 佘明磊 et al. 基于多尺度融合和分数阶微分的工地图像增强 [J]. | 贵州大学学报(自然科学版) , 2024 , 41 (04) : 58-63 .
MLA 林咸磊 et al. "基于多尺度融合和分数阶微分的工地图像增强" . | 贵州大学学报(自然科学版) 41 . 04 (2024) : 58-63 .
APA 林咸磊 , 陈国栋 , 佘明磊 , 牟宏霖 , 林进浔 . 基于多尺度融合和分数阶微分的工地图像增强 . | 贵州大学学报(自然科学版) , 2024 , 41 (04) , 58-63 .

The Influence of the Network Society on Young People's Motives for Symbolic Consumption (网络社会对青年符号消费动机的影响)
Journal article | 2023, 25 (01), 68-71, 94 | 五邑大学学报(社会科学版)

Abstract :

Today's young people are the group most closely tied to the internet, and online life is reshaping their symbolic consumption behavior. Motives for symbolic consumption fall into two categories, social and individual. The shift toward network-based and image-based consumption transforms or reconstructs environmental factors such as social culture, consumption situations, and enjoyment situations, and these affect the two categories of motives differently: overall, they suppress the tendency to flaunt class and wealth while reinforcing the pursuit of personal value and aesthetic needs. By probing the psychological and emotional demands behind young people's consumption behavior, and by reinforcing positive trends and curbing harmful commercial inducement through social norms and school education, young people can be guided more effectively toward a positive and healthy social mentality.

Keyword :

motivation; symbolic consumption; network society; youth

Cite:


GB/T 7714 王健 , 陈国栋 . 网络社会对青年符号消费动机的影响 [J]. | 五邑大学学报(社会科学版) , 2023 , 25 (01) : 68-71,94 .
MLA 王健 et al. "网络社会对青年符号消费动机的影响" . | 五邑大学学报(社会科学版) 25 . 01 (2023) : 68-71,94 .
APA 王健 , 陈国栋 . 网络社会对青年符号消费动机的影响 . | 五邑大学学报(社会科学版) , 2023 , 25 (01) , 68-71,94 .

Process Knowledge Distillation for Multi-Person Pose Estimation Scopus
Conference paper | 2023, 12707

Abstract :

Existing multi-person pose estimation methods tend to grow ever larger in parameter count to improve generalisation performance, which demands huge computational resources. We therefore propose a lightweight multi-person pose estimation method, PKDHP (Process Knowledge Distillation for Human Pose estimation), which applies knowledge distillation to pose estimation. PKDHP treats the training of the student model as a human learning process: it introduces a CABF module to replace the ABF module in ReviewKD, guiding lower-level features with the model's higher-level features, and a Transfer knowledge distillation method that forces the student model to imitate the teacher's feature-transfer process, further mimicking how humans learn. Compared with other knowledge distillation methods, and with the same model parameters, our method achieves higher performance on two multi-person pose estimation datasets (COCO and MPII). © 2023 SPIE.
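The abstract does not spell out CABF or the Transfer step, so the sketch below only illustrates the generic feature-distillation idea such methods build on: a 1x1 convolution adapts the student's feature map to the teacher's channel count before an MSE penalty against the detached teacher features. Names and defaults are illustrative, not the paper's.

```python
# Illustrative feature-distillation loss (not the paper's CABF/Transfer modules).
import torch.nn as nn
import torch.nn.functional as F

class FeatureDistillLoss(nn.Module):
    """MSE between adapted student features and (detached) teacher features."""
    def __init__(self, student_channels: int, teacher_channels: int):
        super().__init__()
        # 1x1 conv so the student map matches the teacher's channel count.
        self.adapter = nn.Conv2d(student_channels, teacher_channels, kernel_size=1)

    def forward(self, feat_student, feat_teacher):
        adapted = self.adapter(feat_student)
        if adapted.shape[-2:] != feat_teacher.shape[-2:]:
            # Resize spatially if the two backbones run at different strides.
            adapted = F.interpolate(adapted, size=feat_teacher.shape[-2:],
                                    mode="bilinear", align_corners=False)
        return F.mse_loss(adapted, feat_teacher.detach())

# usage: total_loss = pose_loss + alpha * distill(feat_student, feat_teacher)
```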

Keyword :

Knowledge distillation; Multi-person pose estimation

Cite:


GB/T 7714 Zhao, Z. , Yan, Z. , She, M. et al. Process Knowledge Distillation for Multi-Person Pose Estimation [unknown].
MLA Zhao, Z. et al. "Process Knowledge Distillation for Multi-Person Pose Estimation" [unknown].
APA Zhao, Z. , Yan, Z. , She, M. , Chen, G. . Process Knowledge Distillation for Multi-Person Pose Estimation [unknown].

Text Removal Method Combining Text Location and Image Inpainting Scopus
Conference paper | 2023, 12707

Abstract :

To address the visual interference and privacy issues caused by text in images or video, a text removal method combining text location and image inpainting is proposed. The method first feeds the image containing text into a text location network to identify the text contour regions, then generates a masked image that treats those regions as simulated damage. Finally, the damaged image is repaired by a damage-reconstruction network, removing the text. To guarantee the reliability of text removal, a complementary fusion layer is proposed so that non-text regions of the image do not change. Experimental results show that the method repairs better than traditional texture- and patch-based algorithms, and its output is closer to the original image features than that of an improved generative adversarial network method. © 2023 SPIE.
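A minimal sketch of the detect, mask, and inpaint pipeline described above, with stand-ins: any text detector is assumed to supply the text boxes, and OpenCV's Telea inpainting replaces the paper's damage-reconstruction network (and its complementary fusion layer).

```python
# Sketch only: placeholder detector output + classical inpainting instead of the
# paper's location and reconstruction networks.
import cv2
import numpy as np

def remove_text(image_bgr, text_boxes):
    """text_boxes: list of (x, y, w, h) rectangles from any text detector (assumed given)."""
    mask = np.zeros(image_bgr.shape[:2], dtype=np.uint8)
    for x, y, w, h in text_boxes:
        mask[y:y + h, x:x + w] = 255          # mark text pixels as simulated damage
    # Repair the masked region from the surrounding non-text pixels; pixels outside
    # the mask are returned unchanged, which is the property the fusion layer enforces.
    return cv2.inpaint(image_bgr, mask, 5, cv2.INPAINT_TELEA)
```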

Keyword :

contextual attention; deep learning; generative adversarial network; image inpainting; text removal

Cite:


GB/T 7714 Yan, Z. , Lin, J. , Zhao, Z. et al. Text Removal Method Combining Text Location and Image Inpainting [unknown].
MLA Yan, Z. et al. "Text Removal Method Combining Text Location and Image Inpainting" [unknown].
APA Yan, Z. , Lin, J. , Zhao, Z. , Chen, G. . Text Removal Method Combining Text Location and Image Inpainting [unknown].

Measurement of human body parameters for human postural assessment via single camera SCIE
Journal article | 2023, 16 (11) | JOURNAL OF BIOPHOTONICS

Abstract :

We present a camera-based approach to measuring human body parameters and develop a human postural assessment system. The approach combines conventional contact measurement with non-contact measurement to overcome the time, expense, and expertise demands of earlier methods. The measurement system consists of a computer, a high-definition camera, and sticker markers applied to the participant's body before measurement. The camera captures a triple-view image of the body; the body outline and the joint points of the skeleton are then extracted to locate bone feature points, and finally the body parameters are measured and extracted. Experimental results demonstrate that the global postural assessment system provides quantitative guidance for postural evaluation and changes how posture is evaluated. The system is significant for early diagnosis of disease and medical rehabilitation treatment.
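The abstract does not give the measurement formulas; the sketch below only illustrates the final scale step under the assumption that a marker of known physical size is visible in the same image plane, so pixel distances between detected joint points can be converted to centimetres. Landmark detection is assumed to happen upstream.

```python
# Hypothetical scale step: pixel distance -> centimetres via a reference marker.
import math

def pixel_to_cm(dist_px, marker_px, marker_cm):
    """Scale a pixel distance by a marker of known physical size (assumed visible)."""
    return dist_px * (marker_cm / marker_px)

def limb_length_cm(p1, p2, marker_px, marker_cm=5.0):
    """p1, p2: (x, y) image coordinates of two joint points; marker_cm is an assumed size."""
    return pixel_to_cm(math.dist(p1, p2), marker_px, marker_cm)
```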

Keyword :

human body measurement; human key points detection; human segmentation; postural assessment

Cite:


GB/T 7714 Yan, Zheng , Zhou, Wenqiang , Chen, Guodong et al. Measurement of human body parameters for human postural assessment via single camera [J]. | JOURNAL OF BIOPHOTONICS , 2023 , 16 (11) .
MLA Yan, Zheng et al. "Measurement of human body parameters for human postural assessment via single camera" . | JOURNAL OF BIOPHOTONICS 16 . 11 (2023) .
APA Yan, Zheng , Zhou, Wenqiang , Chen, Guodong , Xie, Zhexin , Zhao, Ziyang , Zhang, Chentao . Measurement of human body parameters for human postural assessment via single camera . | JOURNAL OF BIOPHOTONICS , 2023 , 16 (11) .

Behavior Monitoring of Tower-Crane Operators Based on an Improved Faster R-CNN (基于改进Faster-R-CNN塔式起重机驾驶人员行为监测研究)
Journal article | 2023, 13 (09), 153-157 | 智能计算机与应用

Abstract :

Given the particular driving environment of tower cranes, and in order to reduce irregular operator behavior and lower the accident rate, this paper presents a hand-detection-based method for monitoring tower-crane operator behavior. The Faster R-CNN model is improved by incorporating pruning and a channel attention mechanism, yielding the CF-R-CNN model. Whether an operator is violating the rules is judged from the intersection-over-union (IoU) between the predicted hand box and the predicted box of the monitored object. After the improvement, the network's F1 score drops by only 2.1% compared with the original network, while FPS rises by 23.0%; comparisons with the FRC-Tiny and Cut-YOLOv3 algorithms are also reported. Experimental results show that the network gains in performance, meets real-time detection requirements, and can be deployed on mobile devices.
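A small sketch of the decision rule described above: a violation is flagged when the IoU between the predicted hand box and the box of a monitored object exceeds a threshold. The threshold value here is an assumption, not the paper's tuned setting.

```python
# IoU-based violation check between a detected hand box and a monitored object box.
def iou(box_a, box_b):
    """Boxes as (x1, y1, x2, y2) in pixel coordinates."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

def is_violation(hand_box, object_box, threshold=0.3):
    # threshold is an assumed value for illustration only
    return iou(hand_box, object_box) >= threshold
```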

Keyword :

Faster R-CNN; deep learning; object detection; channel attention; driver detection

Cite:


GB/T 7714 李亚伟 , 陈文铿 , 林鸿强 et al. 基于改进Faster-R-CNN塔式起重机驾驶人员行为监测研究 [J]. | 智能计算机与应用 , 2023 , 13 (09) : 153-157 .
MLA 李亚伟 et al. "基于改进Faster-R-CNN塔式起重机驾驶人员行为监测研究" . | 智能计算机与应用 13 . 09 (2023) : 153-157 .
APA 李亚伟 , 陈文铿 , 林鸿强 , 张旭生 , 陈子健 , 林进浔 et al. 基于改进Faster-R-CNN塔式起重机驾驶人员行为监测研究 . | 智能计算机与应用 , 2023 , 13 (09) , 153-157 .

Scaffold Spacing Detection in Building Construction Based on the DeepFlux Algorithm (基于DeepFlux算法的建筑施工脚手架间距检测)
Journal article | 2023, 13 (8), 161-164 | 智能计算机与应用

Abstract :

Against the background of continuing scaffold collapse accidents, and to avoid the inefficiency and danger of traditional manual scaffold measurement, computer vision is applied to scaffold safety-compliance inspection. First, the DeepFlux algorithm extracts skeleton information from scaffold images; because the extraction quality and accuracy do not meet practical requirements, the VGG16 feature extraction network in DeepFlux is replaced with InceptionV3, which effectively improves skeleton extraction accuracy. Next, an intersection detection algorithm computes the scaffold crossing points from the extracted skeleton. Finally, the pixel spacing between scaffold bars is computed from the crossing points and converted to real-world spacing with a reference target. Test results show that the average error of the computed scaffold parameters is around 5%, which meets the accuracy needed for scaffold inspection, so the method can replace manual measurement for scaffold safety-compliance checks.
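As a stand-in for the intersection-detection step, the sketch below marks skeleton pixels with three or more skeleton neighbours as bar crossings. This is a generic heuristic on a binary skeleton image, not the paper's exact algorithm, and the pixel spacings between crossings would still have to be converted to real distances with the reference target.

```python
# Generic crossing detector on a binary skeleton image (1 = skeleton pixel).
import cv2
import numpy as np

def find_intersections(skeleton):
    """Return (x, y) candidates where scaffold bars cross in the skeleton image."""
    sk = (skeleton > 0).astype(np.uint8)
    kernel = np.ones((3, 3), dtype=np.float32)
    kernel[1, 1] = 0.0                        # count the 8 neighbours only
    neighbours = cv2.filter2D(sk, -1, kernel)
    ys, xs = np.nonzero((sk == 1) & (neighbours >= 3))
    return list(zip(xs.tolist(), ys.tolist()))
```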

Keyword :

deep learning; scaffold; spacing detection; skeleton extraction

Cite:


GB/T 7714 林鸿强 , 陈文铿 , 黄宏安 et al. 基于DeepFlux算法的建筑施工脚手架间距检测 [J]. | 智能计算机与应用 , 2023 , 13 (8) : 161-164 .
MLA 林鸿强 et al. "基于DeepFlux算法的建筑施工脚手架间距检测" . | 智能计算机与应用 13 . 8 (2023) : 161-164 .
APA 林鸿强 , 陈文铿 , 黄宏安 , 陈国栋 , 黄明炜 , 俞文龙 et al. 基于DeepFlux算法的建筑施工脚手架间距检测 . | 智能计算机与应用 , 2023 , 13 (8) , 161-164 .

A Blotch-Damage Mask Extraction Algorithm for Film Archives (一种胶片档案斑块破损掩膜提取算法)
Journal article | 2023, 41 (04), 16-21 | 佳木斯大学学报(自然科学版)

Abstract :

For film-archive images that are large yet contain many small blotch defects, where extracting a blotch-damage mask would otherwise require scanning the entire image, a blotch extraction method combining frame differencing with visual saliency maps is proposed. The frame-difference method is first applied to the input film-archive images to extract suspect regions; a double-threshold test based on the visual saliency map is then applied only to pixels inside the suspect regions; finally, all pixel decisions are combined into a binary blotch-damage mask. Experimental results show that the algorithm extracts the blotch mask fairly completely without computing the visual saliency map for the whole frame, balancing efficiency against the false-detection rate.
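A hedged sketch of the two-stage idea: frame differencing against both neighbouring frames proposes suspect pixels (a blotch usually appears in a single frame only), and a second, stricter test inside the suspect set yields the binary mask. The paper's saliency score is approximated here by the raw difference magnitude, and both threshold values are assumptions.

```python
# Two-stage blotch mask: loose frame-difference candidates, then a stricter test
# applied only inside the candidate set (saliency replaced by difference magnitude).
import cv2
import numpy as np

def blotch_mask(prev_gray, cur_gray, next_gray, t_candidate=20, t_final=40):
    d_prev = cv2.absdiff(cur_gray, prev_gray)
    d_next = cv2.absdiff(cur_gray, next_gray)
    # Stage 1: a blotch differs from both temporal neighbours.
    candidate = (d_prev > t_candidate) & (d_next > t_candidate)
    # Stage 2: stricter threshold evaluated only where stage 1 fired.
    strong = np.minimum(d_prev, d_next) > t_final
    return (candidate & strong).astype(np.uint8) * 255   # binary blotch-damage mask
```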

Keyword :

frame difference; blotch damage; film archive; visual saliency map

Cite:


GB/T 7714 张旭生 , 陈国栋 , 佘明磊 et al. 一种胶片档案斑块破损掩膜提取算法 [J]. | 佳木斯大学学报(自然科学版) , 2023 , 41 (04) : 16-21 .
MLA 张旭生 et al. "一种胶片档案斑块破损掩膜提取算法" . | 佳木斯大学学报(自然科学版) 41 . 04 (2023) : 16-21 .
APA 张旭生 , 陈国栋 , 佘明磊 , 陈子健 , 戴振国 . 一种胶片档案斑块破损掩膜提取算法 . | 佳木斯大学学报(自然科学版) , 2023 , 41 (04) , 16-21 .

Rebar Spacing Detection Based on Depth Estimation and an Improved YOLOX (基于深度估计及改进YOLO X的钢筋间距检测)
Journal article | 2023, 41 (03), 127-131, 150 | 佳木斯大学学报(自然科学版)

Abstract :

Proper rebar tying at the start of construction is the basis of safe construction, and rebar spacing is a key indicator of tying quality. To prevent accidents caused by tied rebar whose spacing fails to meet the specification, a rebar spacing detection method based on the depth estimation model AdaBins and an improved YOLOX is proposed. A depth map obtained by depth estimation is used to segment the top layer from the multi-layer rebar structure; an object detection algorithm then detects the rebars and judges whether their spacing meets the requirement. The detector is adapted to the characteristics of rebar by adding an attention module and modifying the multi-scale structure, keeping only the two detection heads with the largest receptive fields. Experimental results show that the improved model reaches a mean average precision of 85.35% on the rebar detection task, higher than other object detection algorithms, which satisfies the accuracy requirement for rebar spacing detection.
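A rough sketch of the two stages under stated assumptions: any monocular depth model may supply the depth map and any detector the rebar boxes (AdaBins and the improved YOLOX themselves are not reproduced); the top layer is kept by a simple nearest-depth rule, and spacings are centre-to-centre pixel distances between neighbouring detections.

```python
# Stage 1: keep only the nearest (top) rebar layer from a depth map.
# Stage 2: centre-to-centre pixel spacings between neighbouring rebar detections.
import numpy as np

def top_layer_mask(depth, margin=0.05):
    """Keep pixels whose depth is within `margin` (relative, assumed) of the nearest structures."""
    near = np.percentile(depth, 5)            # depth of the closest structures in the scene
    return depth <= near * (1.0 + margin)

def spacings_from_boxes(boxes):
    """boxes: (x1, y1, x2, y2) detections of roughly parallel rebars.
    Returns pixel spacings along x; convert with a known scale before checking the code limit."""
    centres = sorted((x1 + x2) / 2.0 for x1, _, x2, _ in boxes)
    return [b - a for a, b in zip(centres, centres[1:])]
```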

Keyword :

AdaBins; YOLOX; depth estimation; rebar spacing detection

Cite:


GB/T 7714 戴振国 , 陈国栋 , 赵志峰 et al. 基于深度估计及改进YOLO X的钢筋间距检测 [J]. | 佳木斯大学学报(自然科学版) , 2023 , 41 (03) : 127-131,150 .
MLA 戴振国 et al. "基于深度估计及改进YOLO X的钢筋间距检测" . | 佳木斯大学学报(自然科学版) 41 . 03 (2023) : 127-131,150 .
APA 戴振国 , 陈国栋 , 赵志峰 , 张旭生 , 陈子健 , 林进浔 et al. 基于深度估计及改进YOLO X的钢筋间距检测 . | 佳木斯大学学报(自然科学版) , 2023 , 41 (03) , 127-131,150 .

Temporal-masked skeleton-based action recognition with supervised contrastive learning SCIE
Journal article | 2023, 17 (5), 2267-2275 | SIGNAL IMAGE AND VIDEO PROCESSING

Abstract :

Recent years have seen a resurgence of self-supervised learning in visual representation thanks to contrastive learning and masked image modeling. Existing self-supervised methods for skeleton-based action recognition typically learn only the feature invariance of the data through contrastive learning. In this paper, we propose a contrastive learning method combined with a temporal-masking mechanism for skeleton sequences, encouraging the network to learn action representations beyond feature invariance, e.g., occlusion invariance, by implicitly reconstructing the masked sequences. However, direct masking destroys the feature consistency of the samples, so we propose Supervised Positive Sample Mining and a self-attention module for embeddings to improve the model's generalization. First, supervised contrastive learning improves model robustness by using prior label knowledge. Second, to prevent excessive masking from hindering the model from learning the correct occlusion invariance, a self-attention mechanism is needed that further separates the action classes in the feature space. Results under various experimental protocols on the NTU 60, NTU 120, and PKU-MMD datasets demonstrate the advantages of our method, which outperforms existing state-of-the-art contrastive methods. Code is available at https://github.com/ZZFCV/SASOiCLR.
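A minimal sketch of the two ingredients named in the abstract, a temporal mask over skeleton frames and a supervised contrastive loss, with assumed defaults for the mask ratio and temperature; the paper's Supervised Positive Sample Mining and self-attention module are not reproduced here.

```python
# Temporal masking of a skeleton sequence plus a supervised contrastive (SupCon-style) loss.
import torch
import torch.nn.functional as F

def mask_time(seq, mask_ratio=0.2):
    """seq: (T, V, C) skeleton sequence; zero out a random contiguous span of frames.
    The ratio is an assumed default, not the paper's setting."""
    t = seq.shape[0]
    span = max(1, int(t * mask_ratio))
    start = torch.randint(0, t - span + 1, (1,)).item()
    out = seq.clone()
    out[start:start + span] = 0
    return out

def supcon_loss(embeddings, labels, temperature=0.07):
    """Supervised contrastive loss over a batch of embeddings (N, D) with class labels (N,)."""
    z = F.normalize(embeddings, dim=1)
    sim = z @ z.t() / temperature                          # (N, N) similarity matrix
    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    pos = labels.view(-1, 1).eq(labels.view(1, -1)) & ~self_mask   # same-label positives
    sim = sim.masked_fill(self_mask, float("-inf"))        # exclude the anchor itself
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    log_prob = log_prob.masked_fill(self_mask, 0.0)        # avoid -inf * 0 on the diagonal
    pos_count = pos.sum(dim=1).clamp(min=1)
    loss = -(log_prob * pos.float()).sum(dim=1) / pos_count
    return loss.mean()
```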

Keyword :

Action recognition; Contrastive learning; Masking mechanism; Supervised positive mining

Cite:


GB/T 7714 Zhao, Zhifeng , Chen, Guodong , Lin, Yuxiang . Temporal-masked skeleton-based action recognition with supervised contrastive learning [J]. | SIGNAL IMAGE AND VIDEO PROCESSING , 2023 , 17 (5) : 2267-2275 .
MLA Zhao, Zhifeng et al. "Temporal-masked skeleton-based action recognition with supervised contrastive learning" . | SIGNAL IMAGE AND VIDEO PROCESSING 17 . 5 (2023) : 2267-2275 .
APA Zhao, Zhifeng , Chen, Guodong , Lin, Yuxiang . Temporal-masked skeleton-based action recognition with supervised contrastive learning . | SIGNAL IMAGE AND VIDEO PROCESSING , 2023 , 17 (5) , 2267-2275 .
