Query:
Scholar name: 吴丽君 (Wu Lijun)
Abstract :
To address the degradation of perceptual image quality caused by the simple weighted summation of losses during generative-adversarial-network training, a loss-adaptive generative adversarial super-resolution network (LA-GAN) is proposed. First, the method distinguishes regular texture regions from irregular ones by computing the correlation strength of the corner-point distribution. Second, a region-adaptive generative adversarial learning framework is designed on top of this partition: adversarial learning is applied only in the irregular texture regions, improving perceptual quality. In addition, the high-resolution images in the training set are replaced by images recomposed from the downsampled image and similar image patches, so that the mean-absolute-error loss constrains the network weakly in irregular texture regions and strongly in regular ones, preserving signal fidelity. Finally, experiments show that the optimized network improves both signal fidelity and perceptual quality.
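The abstract does not spell out how the corner-distribution strength is computed. Purely as an illustration, the regular/irregular texture split it describes can be sketched with a simplified Harris corner response and a per-block density threshold; the function names, the Harris formulation, and the 8×8 block size are assumptions, not details from the paper:

```python
import numpy as np

def harris_response(img, k=0.05):
    """Harris corner response from finite-difference gradients (a
    simplification of the usual Gaussian-weighted formulation)."""
    gy, gx = np.gradient(img.astype(float))
    ixx, iyy, ixy = gx * gx, gy * gy, gx * gy
    det = ixx * iyy - ixy ** 2
    trace = ixx + iyy
    return det - k * trace ** 2

def texture_mask(img, block=8, thresh=None):
    """Label a block as irregular texture (True) when its mean corner
    response exceeds a global threshold; regular texture otherwise."""
    r = np.abs(harris_response(img))
    h, w = (s - s % block for s in r.shape)
    blocks = r[:h, :w].reshape(h // block, block, w // block, block)
    density = blocks.mean(axis=(1, 3))        # per-block corner density
    if thresh is None:
        thresh = density.mean()
    return density > thresh

rng = np.random.default_rng(0)
img = np.zeros((32, 32))
img[:, 16:] = rng.random((32, 16))            # right half: noisy (irregular)
mask = texture_mask(img, block=8)
```

Blocks whose corner density exceeds the global mean would be treated as irregular texture, i.e. the regions where adversarial learning is enabled.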
Keyword :
Region adaptive; Loss function; Generative adversarial network; Super-resolution
Cite:
GB/T 7714 | 林旭锋, 吴丽君, 陈志聪, 林培杰, 程树英. 损失自适应的高感知质量生成对抗超分辨率网络 [J]. 福州大学学报(自然科学版), 2025, 53(1): 26-34.
Abstract :
Defect detection plays a crucial role in ensuring the safety and longevity of structures, and defect region classification is particularly useful for focusing effort on potential defect areas. Defect classification networks based on traditional deep convolutional neural networks (DCNNs) still carry high parameter counts and computational demands, making them unsuitable for embedded systems. This paper proposes the Adaptive Prior Activation-Based Binary Information Enhancement Network (AOIE-Net), which sharply reduces computational requirements by binarizing weights and activations. Designed specifically for steel defect detection, AOIE-Net optimizes the binary quantization process and enhances feature representation to improve the performance of BNNs on this task. AOIE-Net introduces a Dual Batch Normalization-based Information Enhancement Block (DBN-IEB) and an Adaptive Binary Activation Independent Optimization (ABA-IO) method to reduce computational complexity while boosting classification accuracy. Experimental results demonstrate that AOIE-Net outperforms state-of-the-art binary neural network models on CIFAR-10, ImageNet, and the NEU-CLS steel defect dataset, achieving classification accuracies of 90.6%, 72.1%, and 99.4%, respectively. The proposed method offers an efficient, low-complexity solution for real-time defect classification in large-scale structural inspections and holds significant potential for practical applications.
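The abstract does not detail AOIE-Net's quantizer. As a hedged sketch of the generic weight binarization that BNNs of this family build on (XNOR-Net-style channel-wise scaling, not the paper's exact scheme):

```python
import numpy as np

def binarize_weights(w):
    """Replace each output channel by its sign pattern times a scalar
    alpha = mean(|w|), the least-squares-optimal scale for approximating
    the real weights with a {-1, +1} tensor."""
    flat = w.reshape(w.shape[0], -1)
    alpha = np.abs(flat).mean(axis=1)               # one scale per channel
    wb = np.sign(w)
    wb[wb == 0] = 1                                 # sign(0) -> +1 by convention
    return wb * alpha.reshape(-1, *([1] * (w.ndim - 1))), alpha

rng = np.random.default_rng(1)
w = rng.normal(size=(4, 3, 3, 3))                   # (out_ch, in_ch, kH, kW)
wb, alpha = binarize_weights(w)
```

At inference, the ±1 pattern allows multiplications to become sign flips (or XNOR/popcount on packed bits), which is where the computational savings of binary networks come from.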
Keyword :
Binary neural network; Deep learning; Enhanced binary information; Image classification; Steel defects
Cite:
GB/T 7714 | Wu, Lijun, Chen, Qingqi, Su, Jingxuan, Chen, Zhicong, Cheng, Shuying. Binary information enhancement network for efficient steel defects detection and classification [J]. SMART STRUCTURES AND SYSTEMS, 2025, 35(3): 153-162.
Abstract :
In the field of image super-resolution (SR), deep learning-based models have achieved remarkable success. However, these models often face compatibility issues with low-power devices due to their computational and memory constraints. To address this challenge, numerous lightweight and efficient models have been proposed. While these models typically employ smaller convolutional kernels and shallower architectures to reduce parameter counts and computational complexity, they often neglect the importance of capturing global receptive fields. In this paper, we propose a simple yet effective deep network, termed the dilated-convolutional feature modulation network (DCFMN), to tackle these limitations. Specifically, we introduce a dilated separable modulation unit (DSMU) to aggregate spatial information from diverse large receptive fields. To complement the DSMU, which processes features from a long-range perspective, we further design a local feature enhancement module (LFEM) to extract local contextual information for effective channel fusion. Additionally, by leveraging reparameterization techniques, we ensure that the model incurs no additional computational overhead during inference. Extensive experimental results demonstrate that our DCFMN achieves competitive performance among existing efficient SR methods, while maintaining a compact model size and low computational complexity.
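The abstract mentions reparameterization that adds no inference-time overhead. A minimal sketch of the general idea, assuming a parallel 3×3 + 1×1 branch pair rather than the paper's actual DSMU/LFEM internals: the 1×1 branch is folded into the centre of the 3×3 kernel, so a single convolution reproduces the two-branch sum exactly:

```python
import numpy as np

def conv2d(x, w):
    """'Same'-padded 2D cross-correlation for one image: x is (in_ch, H, W),
    w is (out_ch, in_ch, k, k) with odd k. Naive loops, for clarity only."""
    cin, h, wd = x.shape
    cout, _, k, _ = w.shape
    p = k // 2
    xp = np.pad(x, ((0, 0), (p, p), (p, p)))
    out = np.zeros((cout, h, wd))
    for o in range(cout):
        for i in range(cin):
            for dy in range(k):
                for dx in range(k):
                    out[o] += w[o, i, dy, dx] * xp[i, dy:dy + h, dx:dx + wd]
    return out

def merge_branches(w3, w1):
    """Fold a parallel 1x1 branch into a 3x3 kernel by adding it to the
    centre tap, so one conv equals the sum of the two branches."""
    merged = w3.copy()
    merged[:, :, 1, 1] += w1[:, :, 0, 0]
    return merged

rng = np.random.default_rng(2)
x = rng.normal(size=(2, 6, 6))
w3 = rng.normal(size=(3, 2, 3, 3))
w1 = rng.normal(size=(3, 2, 1, 1))
merged = merge_branches(w3, w1)
```

The merged kernel is computed once after training; at inference only the single 3×3 convolution runs, which is why the multi-branch design costs nothing at deployment.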
Keyword :
Deep learning; Image super-resolution; Lightweight; Reparameterization
Cite:
GB/T 7714 | Wu, Lijun, Li, Shan, Chen, Zhicong. Dilated-convolutional feature modulation network for efficient image super-resolution [J]. JOURNAL OF REAL-TIME IMAGE PROCESSING, 2025, 22(2).
Abstract :
Person re-identification is a key link in cross-camera tracking. Mainstream methods mostly pre-train on ImageNet, ignoring the domain gap between datasets, and tend to use large multi-branch models of high complexity. This paper designs a person re-identification method that pre-trains under the supervision of noisy labels derived from raw video, reducing the domain gap and strengthening feature representation; it replaces the skip connections of the residual network with attention-based feature fusion to enhance feature extraction, embeds a coordinate attention mechanism to reinforce key features and suppress low-contribution ones at low complexity, applies random erasing to the input for data augmentation and better generalization, and trains the network jointly under classification, triplet, and center losses. Ablation studies are completed on the public Market-1501 and DukeMTMC datasets, and comparisons with mainstream methods show that the method reaches high accuracy without a complex multi-branch structure.
Keyword :
Residual network; Attention mechanism; Feature fusion; Person re-identification; Pre-training
Cite:
GB/T 7714 | 南灏, 吴丽君. 基于视频预训练和注意力特征融合的行人重识别方法 [J]. 智能计算机与应用, 2024, 14(1): 95-101.
Abstract :
To reduce the volume of transmitted power-signal data, this paper proposes a power data compression algorithm that combines compressed sensing with LZW coding, further compressing the data and raising the overall compression ratio while keeping reconstruction accuracy unchanged. First, various compressed-sensing measurement matrices are simulated and analyzed; a sparse random matrix is then chosen as the measurement matrix, a hardware architecture that completes the compressed-sensing computation quickly is proposed, and the hardware is designed and verified. Experiments show that the design runs at up to 200 MHz on an FPGA; the total latency of the whole compression process is about 16.11 μs; and at a reconstruction error of about 4.83%, the compression ratio is about 36.83%, roughly 13.17% better than compressed sensing alone.
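As a software sketch of the pipeline described above (the paper implements it in FPGA hardware; the {-1, 0, +1} matrix density, the 8-bit quantization step, and the test signal are illustrative assumptions): measure the signal with a sparse random matrix, quantize, then LZW-encode the measurement bytes.

```python
import numpy as np

def sparse_random_matrix(m, n, density=1 / 3, seed=0):
    """Sparse random measurement matrix with entries in {-1, 0, +1},
    P(+1) = P(-1) = density/2, scaled for unit expected column norm."""
    rng = np.random.default_rng(seed)
    u = rng.random((m, n))
    phi = np.zeros((m, n))
    phi[u < density / 2] = 1.0
    phi[u > 1 - density / 2] = -1.0
    return phi / np.sqrt(m * density)

def lzw_encode(data: bytes) -> list:
    """Textbook LZW over bytes; returns a list of integer codes."""
    table = {bytes([i]): i for i in range(256)}
    out, s = [], b""
    for ch in data:
        sc = s + bytes([ch])
        if sc in table:
            s = sc
        else:
            out.append(table[s])       # emit code for longest known prefix
            table[sc] = len(table)     # grow the dictionary
            s = bytes([ch])
    if s:
        out.append(table[s])
    return out

# Measure a tone, quantize to 8 bits, then entropy-code the byte stream.
t = np.linspace(0, 1, 512, endpoint=False)
x = np.sin(2 * np.pi * 50 * t)                 # 50 Hz power-style signal
phi = sparse_random_matrix(128, 512)
y = phi @ x                                    # 512 samples -> 128 measurements
q = np.round((y - y.min()) / (np.ptp(y) + 1e-12) * 255).astype(np.uint8)
codes = lzw_encode(q.tobytes())
```

The compressed-sensing stage fixes the 4:1 measurement ratio; LZW then squeezes residual redundancy out of the quantized measurements, which is the extra gain the paper reports over compressed sensing alone.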
Keyword :
LZW coding; Compressed sensing; Power signal
Cite:
GB/T 7714 | 谢宇杰, 陈志聪, 吴丽君. 融合压缩感知和LZW编码的电力数据压缩算法 [J]. 智能计算机与应用, 2024, 14(1): 124-129.
Abstract :
Image super-resolution networks are usually trained on datasets built with bicubic downsampling, but because the bicubic degradation model is fixed, such networks generalize poorly and cannot handle real-world low-resolution images. To solve this, a preprocessing module is proposed: combined with a network trained on bicubic-downsampled data, it improves generalization while reducing resource consumption. A feature-learning training strategy and a multi-task joint-tuning strategy are also designed for different accuracy requirements; choosing the strategy that matches the requirement meets the accuracy target with low computational cost, fast training, and wide applicability. Experiments show that adding the preprocessing module trades a small increase in model parameters for large gains in reconstruction quality and perceptual quality, and the different strategies deliver further accuracy improvements.
Keyword :
Multi-task learning; Computer vision; Super-resolution; Preprocessing module
Cite:
GB/T 7714 | 林旭锋, 吴丽君. 简化退化模型的真实图像超分辨率网络 [J]. 网络安全与数据治理, 2024, 43(3): 34-39.
Abstract :
Cross-domain object detection is challenging because object detection models are highly susceptible to domain style. As a popular semi-supervised learning method, the teacher-student framework (pseudo labels from the teacher model supervise the student model) achieves significant accuracy gains in cross-domain object detection. However, it suffers from domain shift and is prone to generating low-quality pseudo labels, which limits performance. To mitigate this problem, we propose a teacher-student framework that combines a style transfer method, augmentation strategies, and adversarial learning to address domain shift. Specifically, we design a Fourier style transfer method to reduce the gap between source and target domains without altering the semantic information of the objects. Furthermore, we improve the data augmentation strategy by weakly augmenting images from the target domain, preventing the teacher model from biasing toward the source domain. Finally, we apply feature-level adversarial training to the student model, which is trained on images from all domains, so that features derived from all domains share similar distributions; this ensures the student model produces domain-invariant features. Our approach achieves state-of-the-art performance on several benchmarks, for example 51.6% and 49.9% mAP on Foggy Cityscapes and Clipart1K, respectively.
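The abstract does not specify the exact Fourier style transfer used. A plausible minimal sketch, following the widely used Fourier Domain Adaptation recipe (swap the low-frequency amplitude spectrum, keep the source phase) rather than the paper's own variant:

```python
import numpy as np

def fourier_style_transfer(src, tgt, beta=0.1):
    """Replace the low-frequency amplitude of the source spectrum with the
    target's, keeping the source phase: content stays put, global 'style'
    (illumination, color cast, contrast) moves toward the target domain."""
    fs = np.fft.fft2(src)
    ft = np.fft.fft2(tgt)
    amp_s, pha_s = np.abs(fs), np.angle(fs)
    h, w = src.shape
    b = int(min(h, w) * beta)                    # half-width of the swap box
    amp = np.fft.fftshift(amp_s)                 # move DC to the centre
    amp_t = np.fft.fftshift(np.abs(ft))
    ch, cw = h // 2, w // 2
    amp[ch - b:ch + b, cw - b:cw + b] = amp_t[ch - b:ch + b, cw - b:cw + b]
    amp = np.fft.ifftshift(amp)
    return np.real(np.fft.ifft2(amp * np.exp(1j * pha_s)))

rng = np.random.default_rng(3)
src = rng.random((16, 16))                       # stand-in grayscale images
tgt = rng.random((16, 16))
out = fourier_style_transfer(src, tgt, beta=0.1)
```

Because phase carries object structure, the detector sees the same scene geometry; only low-frequency appearance shifts, which is why such images can be pseudo-labeled safely.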
Keyword :
Adversarial Learning; Cross-Domain Object Detection; Style Transfer; Unsupervised Domain Adaptation
Cite:
GB/T 7714 | Wu, Lijun, Cao, Zhe, Chen, Zhicong. Teacher-Student Cross-Domain Object Detection Model Combining Style Transfer and Adversarial Learning [J]. PATTERN RECOGNITION AND COMPUTER VISION, PRCV 2023, PT X, 2024, 14434: 334-345.
Abstract :
Video super-resolution recovers high-resolution frames from multiple low-resolution frames, and recurrent structures are a common framework choice for the task. BasicVSR employs bidirectional propagation and feature alignment to exploit information from the entire input video efficiently. In this work, we revisit the role of the various modules in BasicVSR and redesign the network to improve its performance. First, after optical-flow warping, a reference-based feature enrichment module maintains centralized communication with the reference frame, which helps handle complex motion; in addition, for each selected keyframe, adjacent frames are divided into two regions according to their degree of motion deviation relative to the keyframe, and models with different receptive fields are applied for feature extraction, further alleviating the accumulation of alignment errors. In the feature correction module, the simple stack of residual blocks is replaced with a residual-in-residual (RIR) structure that fuses features at different levels, making the final feature information more comprehensive and abundant. In addition, dense connections are introduced in the reconstruction module to promote full use of hierarchical feature information for better reconstruction. Experiments on two public datasets, Vid4 and REDS4, show that compared with BasicVSR the improved model raises PSNR by 0.27 dB and 0.33 dB, respectively; in terms of visual perception, it also sharpens images and reduces artifacts.
Keyword :
Bidirectional propagation; Densely connected residual; Feature enrichment module; Time difference; Video super-resolution
Cite:
GB/T 7714 | Wu, Lijun, Ma, Yong, Chen, Zhicong. Dense video super-resolution time-differential network with feature enrichment module [J]. SIGNAL IMAGE AND VIDEO PROCESSING, 2024, 18(11): 7887-7897.
Abstract :
Super-resolution (SR) algorithms have been broadly applied to improve the visual quality of images. However, unstable SR results and the difficulty of collecting high-resolution (HR) and low-resolution (LR) image pairs still greatly block their application to position-sensitive downstream tasks in the real world. To address these difficulties, we propose an unpaired-image cross-domain supervised SR method for position-sensitive downstream tasks (CDS PosSR), which greatly improves the fidelity of geometric positions in the image by exploiting its geometric consistency. Since differing semantic information and the root-mean-square error cannot constrain unpaired images during the degradation process, an unpaired-image cross-domain supervised hierarchical degradation model is elaborated. Meanwhile, randomly distributed input is adopted to alleviate the problem that the dataset cannot fully cover real-world LR images. According to the experimental results, CDS PosSR not only improves the visual and quantitative performance of the generated images but also outperforms other SR reconstruction algorithms in the fidelity of feature-point location and geometry, providing support for position-sensitive downstream tasks.
Keyword :
Degradation learning; Geometric consistency; Unpaired image super-resolution (SR)
Cite:
GB/T 7714 | Wu, Lijun, Chen, Lanxin, Chen, Zhicong, Cheng, Shuying, Chen, Zhaohui. CDS PosSR: Cross-Domain Supervised Unpaired Image Super-Resolution for Position-Sensitive Downstream Tasks [J]. IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, 2024, 73.
Abstract :
To address the incomplete segmentation of large objects and mis-segmentation of tiny objects that universally exist in semantic segmentation algorithms, we propose PACAMNet, a real-time segmentation network based on a short-term dense concatenate module with parallel atrous convolution and attentional feature fusion. First, parallel atrous convolution is introduced to improve the short-term dense concatenate module: by adjusting the atrous factor, multi-scale semantic information is obtained so that even the last layer of the module receives rich input feature maps. Second, an attention feature fusion module is proposed that aligns the receptive fields of deep and shallow feature maps via depthwise-separable convolutions of different sizes and uses channel attention to generate weights that fuse the two effectively. Finally, experiments on the Cityscapes and CamVid datasets reach segmentation accuracies of 77.4% and 74.0% at inference speeds of 98.7 FPS and 134.6 FPS, respectively. Compared with other methods, PACAMNet improves inference speed while maintaining higher segmentation accuracy, achieving a better balance between the two.
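To make the receptive-field argument concrete, here is a minimal 1-D sketch of parallel atrous (dilated) convolution, not the paper's actual module: the same small kernel is run at several dilation rates and the branch outputs are fused, so the block sees multi-scale context at the cost of a single kernel's parameters.

```python
import numpy as np

def dilated_conv1d(x, w, d):
    """'Valid' 1-D dilated convolution: taps are spaced d samples apart,
    so a length-k kernel covers a receptive field of (k - 1) * d + 1."""
    k = len(w)
    span = (k - 1) * d
    n = len(x) - span
    return np.array([sum(w[j] * x[i + j * d] for j in range(k))
                     for i in range(n)])

def parallel_atrous(x, w, rates=(1, 2, 4)):
    """Run the same kernel at several dilation rates and sum the
    centre-cropped branch outputs: multi-scale context in one block."""
    outs = [dilated_conv1d(x, w, d) for d in rates]
    m = min(len(o) for o in outs)
    cropped = [o[(len(o) - m) // 2:(len(o) - m) // 2 + m] for o in outs]
    return np.sum(cropped, axis=0)
```

With rates (1, 2, 4) and a length-3 kernel, the fused output mixes receptive fields of 3, 5, and 9 samples, which is the mechanism the module above uses to feed its last layer richer input features.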
Keyword :
Atrous convolution; Attention mechanism; Feature fusion; Real-time semantic segmentation
Cite:
GB/T 7714 | Wu, Lijun, Qiu, Shangdong, Chen, Zhicong. Real-time semantic segmentation network based on parallel atrous convolution for short-term dense concatenate and attention feature fusion [J]. JOURNAL OF REAL-TIME IMAGE PROCESSING, 2024, 21(3).