Query:
Scholar name: 陈志聪 (Chen, Zhicong)
Abstract :
To address the degradation of perceptual image quality caused by simple loss weighting during generative adversarial network training, a loss-adaptive generative adversarial super-resolution network (LA-GAN) is proposed. First, the method distinguishes regular-texture regions from irregular-texture regions by computing the correlation strength of the corner-point distribution. Second, a region-adaptive generative adversarial learning framework is designed on the basis of these regions: the network performs adversarial learning only in irregular-texture regions, improving perceptual quality. In addition, recombined images built from the down-sampled image and image-patch similarity replace the high-resolution images in the training set, so that the mean absolute loss constrains the network weakly in irregular-texture regions and strongly in regular-texture regions, preserving signal fidelity. Finally, experiments show that the optimized network improves both signal fidelity and perceptual quality.
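The region split described above hinges on corner statistics. As a rough illustration only (not the paper's exact "correlation strength" measure), the sketch below marks image blocks as irregular texture when their Harris-corner density exceeds the image-wide average; `block`, `k`, and `thresh` are hypothetical parameters:

```python
import numpy as np

def _box3(a):
    """3x3 box filter with edge padding (smooths the structure tensor)."""
    p = np.pad(a, 1, mode='edge')
    return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
               for i in range(3) for j in range(3)) / 9.0

def corner_density_mask(img, block=16, k=0.04, thresh=1e-4):
    """Mark image blocks as irregular texture (True) when their
    Harris-corner density exceeds the image-wide average.
    A rough stand-in for LA-GAN's corner-distribution criterion."""
    img = img.astype(np.float64)
    gy, gx = np.gradient(img)
    # smoothed structure-tensor entries and Harris corner response
    ixx, iyy, ixy = _box3(gx * gx), _box3(gy * gy), _box3(gx * gy)
    r = (ixx * iyy - ixy ** 2) - k * (ixx + iyy) ** 2
    corners = r > thresh
    h, w = img.shape
    hb, wb = h // block, w // block
    density = (corners[:hb * block, :wb * block]
               .reshape(hb, block, wb, block).mean(axis=(1, 3)))
    return density > density.mean()
```

Adversarial loss would then be applied only where the mask is True, and the strong pixel-wise constraint where it is False.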
Keyword :
Region adaptation; loss function; generative adversarial network; super-resolution
Cite:
GB/T 7714: 林旭锋, 吴丽君, 陈志聪, et al. 损失自适应的高感知质量生成对抗超分辨率网络[J]. 福州大学学报(自然科学版), 2025, 53(1): 26-34.
MLA: 林旭锋, et al. "损失自适应的高感知质量生成对抗超分辨率网络." 福州大学学报(自然科学版) 53.1 (2025): 26-34.
APA: 林旭锋, 吴丽君, 陈志聪, 林培杰, 程树英. (2025). 损失自适应的高感知质量生成对抗超分辨率网络. 福州大学学报(自然科学版), 53(1), 26-34.
Abstract :
Reliable identification of gunshot events is crucial for reducing gun violence and enhancing public safety. However, current gunshot detection and recognition methods are still affected by complex shooting scenarios, various non-gunshot events, diverse firearm types, and scarce gunshot datasets. To address these issues, a novel general deep transfer learning approach based on the tri-axial acceleration of guns is proposed for gunshot detection and recognition, which combines a temporal deep learning model with transfer learning and automated machine learning (AutoML) to improve accuracy, reliability, and generalization. First, a new gunshot recognition model named MobileNetTime is proposed for two-class gunshot event detection, three-class coarse firearm recognition, and 15-class fine firearm recognition; it uses 1-D convolutions and inverted residual modules to autonomously extract higher-level features from the time-series acceleration data. Second, considering the impact of non-gunshot events, AutoML is employed for model fine-tuning to transfer the pretrained MobileNetTime from handguns to various firearm types. In addition, we propose a low-power, versatile gunshot recognition system framework employing a tri-axial accelerometer for both wrist-worn and gun-embedded scenarios, which adopts a two-stage wake-up mechanism that selectively monitors gunshot events using temporal and spectral energy features. Experimental results on the two gunshot datasets, DGUWA and GRD, show that the proposed model achieves up to 100% accuracy on DGUWA and 98.98% on GRD for two-class gunshot detection. Moreover, the proposed deep transfer learning approach achieves 98.98% accuracy for 16-class firearm classification, 6.21% higher than the model without transfer learning.
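The two-stage wake-up mechanism can be sketched as follows; the thresholds, sampling rate, and 100 Hz cutoff are illustrative assumptions, not values from the paper:

```python
import numpy as np

def wake_up(window, fs=1000, t_thresh=2.0, s_thresh=0.3, f_cut=100.0):
    """Two-stage wake-up trigger on a tri-axial acceleration window (N, 3).

    Stage 1 gates on temporal RMS energy; stage 2 confirms with the
    fraction of spectral energy above f_cut Hz, so the heavier
    recognition model runs only on likely gunshot transients.
    All thresholds here are hypothetical."""
    mag = np.linalg.norm(window, axis=1)          # per-sample magnitude
    if np.sqrt(np.mean(mag ** 2)) < t_thresh:     # stage 1: temporal energy
        return False
    spec = np.abs(np.fft.rfft(mag - mag.mean())) ** 2
    freqs = np.fft.rfftfreq(mag.size, d=1.0 / fs)
    ratio = spec[freqs >= f_cut].sum() / spec.sum()
    return bool(ratio >= s_thresh)                # stage 2: spectral energy
```

A sharp impulse (broadband, high energy) passes both stages, while low-energy motion never reaches the FFT, keeping average power low.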
Keyword :
Accelerometers; Accuracy; Adaptation models; Automated machine learning (AutoML); Data models; deep transfer learning; Feature extraction; gunshot detection and recognition; Internet of Things; Monitoring; Real-time systems; Training; Transfer learning; tri-axial acceleration
Cite:
GB/T 7714: Chen, Zhicong, Zheng, Haoxin, Wu, Lijun, et al. Deep-Transfer-Learning-Based Intelligent Gunshot Detection and Firearm Recognition Using Tri-Axial Acceleration[J]. IEEE Internet of Things Journal, 2025, 12(5): 5891-5900.
MLA: Chen, Zhicong, et al. "Deep-Transfer-Learning-Based Intelligent Gunshot Detection and Firearm Recognition Using Tri-Axial Acceleration." IEEE Internet of Things Journal 12.5 (2025): 5891-5900.
APA: Chen, Zhicong, Zheng, Haoxin, Wu, Lijun, Huang, Jingchang, & Yang, Yang. (2025). Deep-Transfer-Learning-Based Intelligent Gunshot Detection and Firearm Recognition Using Tri-Axial Acceleration. IEEE Internet of Things Journal, 12(5), 5891-5900.
Abstract :
Defect detection plays a crucial role in ensuring the safety and longevity of structures, and defect region classification is particularly beneficial for focusing effort on potential defect areas. Traditional defect classification networks based on deep convolutional neural networks (DCNNs) still have high parameter counts and computational demands, making them unsuitable for embedded systems. This paper proposes the Adaptive Prior Activation-Based Binary Information Enhancement Network (AOIE-Net), which significantly reduces computational requirements by binarizing weights and activations. Designed specifically for steel defect detection, AOIE-Net optimizes the binary quantization process and enhances feature representation to improve the performance of binary neural networks (BNNs) in steel defect detection tasks. AOIE-Net introduces a Dual Batch Normalization-based Information Enhancement Block (DBN-IEB) and an Adaptive Binary Activation Independent Optimization (ABA-IO) method to reduce computational complexity while boosting classification accuracy. Experimental results demonstrate that AOIE-Net outperforms state-of-the-art binary neural network models on CIFAR-10, ImageNet, and the NEU-CLS steel defect dataset, achieving classification accuracies of 90.6%, 72.1%, and 99.4%, respectively. The proposed method offers an efficient, low-complexity solution for real-time defect classification in large-scale structural inspections and holds significant potential for practical applications.
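For readers unfamiliar with binarization, the minimal sketch below shows the XNOR-Net-style weight binarization that networks like AOIE-Net build on: weights collapse to their sign, rescaled by the mean absolute value; the paper's DBN-IEB and ABA-IO refinements are not reproduced here.

```python
import numpy as np

def binarize(w):
    """XNOR-Net-style weight binarization: sign(w) scaled by the mean
    absolute value, preserving the layer's overall magnitude.
    A generic sketch of the quantization step BNNs build on."""
    alpha = np.mean(np.abs(w))
    return alpha * np.sign(w), alpha

def binary_dense(x, w):
    """Forward pass of a dense layer with binarized weights and activations."""
    wb, _ = binarize(w)
    xb = np.sign(x)          # binary activation in {-1, 0, +1}
    return xb @ wb
```

In a real BNN the {-1, +1} products reduce to XNOR/popcount operations, which is where the computational savings come from.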
Keyword :
binary neural network; deep learning; enhanced binary information; image classification; steel defect
Cite:
GB/T 7714: Wu, Lijun, Chen, Qingqi, Su, Jingxuan, et al. Binary information enhancement network for efficient steel defects detection and classification[J]. Smart Structures and Systems, 2025, 35(3): 153-162.
MLA: Wu, Lijun, et al. "Binary information enhancement network for efficient steel defects detection and classification." Smart Structures and Systems 35.3 (2025): 153-162.
APA: Wu, Lijun, Chen, Qingqi, Su, Jingxuan, Chen, Zhicong, & Cheng, Shuying. (2025). Binary information enhancement network for efficient steel defects detection and classification. Smart Structures and Systems, 35(3), 153-162.
Abstract :
In the field of image super-resolution (SR), deep learning-based models have achieved remarkable success. However, these models often face compatibility issues with low-power devices due to their computational and memory constraints. To address this challenge, numerous lightweight and efficient models have been proposed. While these models typically employ smaller convolutional kernels and shallower architectures to reduce parameter counts and computational complexity, they often neglect the importance of capturing global receptive fields. In this paper, we propose a simple yet effective deep network, termed the dilated-convolutional feature modulation network (DCFMN), to tackle these limitations. Specifically, we introduce a dilated separable modulation unit (DSMU) to aggregate spatial information from diverse large receptive fields. To complement the DSMU, which processes features from a long-range perspective, we further design a local feature enhancement module (LFEM) to extract local contextual information for effective channel fusion. Additionally, by leveraging reparameterization techniques, we ensure that the model incurs no additional computational overhead during inference. Extensive experimental results demonstrate that our DCFMN achieves competitive performance among existing efficient SR methods, while maintaining a compact model size and low computational complexity.
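The receptive-field benefit of dilation that motivates the DSMU can be seen in a single-channel sketch: a 3x3 kernel with dilation d covers a (2d+1) x (2d+1) area with only nine weights. This is a generic illustration, not the paper's multi-branch module:

```python
import numpy as np

def dilated_conv2d(x, k, d=2):
    """'Same'-padded 2-D convolution with dilation d on one channel.

    The kernel's taps are spread d pixels apart, so a kh x kw kernel
    sees an effective (d*(kh-1)+1) x (d*(kw-1)+1) window at no extra
    parameter cost -- the mechanism the DSMU exploits."""
    kh, kw = k.shape
    eff_h, eff_w = d * (kh - 1) + 1, d * (kw - 1) + 1   # effective kernel size
    ph, pw = eff_h // 2, eff_w // 2                     # zero padding
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x, dtype=float)
    for i in range(kh):
        for j in range(kw):
            out += k[i, j] * xp[i * d:i * d + x.shape[0],
                                j * d:j * d + x.shape[1]]
    return out
```

Stacking branches with several dilation rates, as the DSMU does, aggregates information from multiple large receptive fields at once.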
Keyword :
Deep learning; Image super-resolution; Lightweight; Reparameterization
Cite:
GB/T 7714: Wu, Lijun, Li, Shan, Chen, Zhicong. Dilated-convolutional feature modulation network for efficient image super-resolution[J]. Journal of Real-Time Image Processing, 2025, 22(2).
MLA: Wu, Lijun, et al. "Dilated-convolutional feature modulation network for efficient image super-resolution." Journal of Real-Time Image Processing 22.2 (2025).
APA: Wu, Lijun, Li, Shan, & Chen, Zhicong. (2025). Dilated-convolutional feature modulation network for efficient image super-resolution. Journal of Real-Time Image Processing, 22(2).
Abstract :
To reduce the data volume in power signal transmission, this paper proposes a power data compression algorithm that combines compressed sensing with LZW coding, further compressing power data while keeping the reconstruction accuracy unchanged and thereby improving the overall compression ratio. First, various measurement matrices for compressed sensing are analyzed by simulation; a sparse random matrix is then chosen as the measurement matrix, a hardware implementation method that completes the compressed-sensing computation quickly is proposed, and the hardware design is completed and verified. Experiments show that the design runs at up to 200 MHz on an FPGA device; the total latency of the whole compression process is about 16.11 μs; and at a reconstruction error of about 4.83%, the data compression ratio is about 36.83%, roughly 13.17% better than using compressed sensing alone.
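The two stages of the pipeline, a sparse-random-matrix measurement followed by LZW coding, can be sketched in software as below (the matrix density and seed are illustrative; the paper's contribution is the FPGA implementation):

```python
import numpy as np

def lzw_encode(data):
    """Plain LZW over a byte sequence; returns a list of dictionary codes."""
    table = {bytes([i]): i for i in range(256)}
    w, out = b"", []
    for b in bytes(data):
        wb = w + bytes([b])
        if wb in table:
            w = wb                       # extend the current phrase
        else:
            out.append(table[w])         # emit code, grow the dictionary
            table[wb] = len(table)
            w = bytes([b])
    if w:
        out.append(table[w])
    return out

def sparse_measure(x, m, density=0.1, seed=0):
    """Compressed-sensing measurement y = Phi @ x with a sparse random
    matrix (entries in {-1, 0, +1}), the matrix family chosen in the paper.
    density and seed are illustrative parameters."""
    rng = np.random.default_rng(seed)
    phi = rng.choice([-1.0, 0.0, 1.0], size=(m, x.size),
                     p=[density / 2, 1 - density, density / 2])
    return phi @ x
```

Quantizing the measurements to bytes and LZW-coding them is what yields the extra compression on top of the m < n measurement step.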
Keyword :
LZW coding; compressed sensing; power signal
Cite:
GB/T 7714: 谢宇杰, 陈志聪, 吴丽君. 融合压缩感知和LZW编码的电力数据压缩算法[J]. 智能计算机与应用, 2024, 14(1): 124-129.
MLA: 谢宇杰, et al. "融合压缩感知和LZW编码的电力数据压缩算法." 智能计算机与应用 14.1 (2024): 124-129.
APA: 谢宇杰, 陈志聪, 吴丽君. (2024). 融合压缩感知和LZW编码的电力数据压缩算法. 智能计算机与应用, 14(1), 124-129.
Abstract :
Obtaining the best performance from a photovoltaic system requires a proper model. This paper uses a new approach based on a modified social network search algorithm combined with the dichotomy method to produce the best parameters of a photovoltaic cell, module, and array. To improve the quality of the estimated parameters, a control parameter drawn from a Gaussian or Cauchy distribution is randomly added in the search space to help the agents converge to the optimal solution. The dichotomy method is then inserted into the objective function to compute the best-estimated currents. Application of the proposed model to three different systems, and comparison with existing methods, shows its high accuracy, with best root-mean-square errors of 6.3554 × 10⁻⁴ for the RTC cell, 1.9096 × 10⁻³ for the Photowatt PWP module, and 0.0134 for the 18-PV experimental field.
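The role of the dichotomy (bisection) method here is to solve the implicit single-diode equation for the cell current inside the objective function. A minimal sketch follows; the parameter values used in testing are illustrative RTC-like numbers, not the paper's fitted ones:

```python
import math

def diode_current(v, iph, i0, rs, rsh, n_vt, tol=1e-12):
    """Solve the implicit single-diode equation for cell current I by
    bisection (the 'dichotomy method').

    f(I) = Iph - I0*(exp((V + I*Rs)/nVt) - 1) - (V + I*Rs)/Rsh - I
    is strictly decreasing in I, so a sign change brackets the root."""
    def f(i):
        return (iph - i0 * (math.exp((v + i * rs) / n_vt) - 1.0)
                - (v + i * rs) / rsh - i)
    lo, hi = -iph, iph + 1.0          # bracket with f(lo) > 0 > f(hi)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            lo = mid                  # root lies to the right of mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Given this solver, the outer metaheuristic only has to search over the five model parameters; each candidate is scored by the RMSE between measured and computed currents.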
Keyword :
Learning algorithms; Mean square error; Parameter estimation; Photoelectrochemical cells; Solar panels; Solar power generation
Cite:
GB/T 7714: Gnetchejo, Patrick Juvet, Daniel, Mbadjoun Wapet, Dadjé, Abdouramani, et al. A new approach based on modified social network search algorithm combined with dichotomy method for solar photovoltaic parameter estimation[J]. International Journal of Ambient Energy, 2024, 45(1).
MLA: Gnetchejo, Patrick Juvet, et al. "A new approach based on modified social network search algorithm combined with dichotomy method for solar photovoltaic parameter estimation." International Journal of Ambient Energy 45.1 (2024).
APA: Gnetchejo, Patrick Juvet, Daniel, Mbadjoun Wapet, Dadjé, Abdouramani, Salomé, Ndjakomo Essiane, Pilario, Karl Ezra, Pierre, Ele, et al. (2024). A new approach based on modified social network search algorithm combined with dichotomy method for solar photovoltaic parameter estimation. International Journal of Ambient Energy, 45(1).
Abstract :
Cross-domain object detection is challenging because object detection models are highly susceptible to domain style. As a popular semi-supervised learning method, the teacher-student framework (pseudo labels from the teacher model supervise the student model) achieves significant accuracy gains in cross-domain object detection. However, it suffers from domain shift and is prone to generating low-quality pseudo labels, which limits performance. To mitigate this problem, we propose a teacher-student framework that utilizes a style transfer method, augmentation strategies, and adversarial learning to address domain shift. Specifically, we design a Fourier style transfer method to reduce the gap between the source and target domains without altering the semantic information of the objects. Furthermore, we improve the data augmentation strategy by weakly augmenting images from the target domain, to avoid biasing the teacher model toward the source domain. Finally, we employ feature-level adversarial training in the student model, which is trained on images from all domains, encouraging features from all domains to share similar distributions. This ensures that the student model produces domain-invariant features. Our approach achieves state-of-the-art performance on several benchmarks, for example 51.6% and 49.9% mAP on Foggy Cityscapes and Clipart1K, respectively.
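The Fourier style transfer step follows the familiar low-frequency amplitude-swap idea: replace the source image's low-frequency amplitude spectrum with the target's while keeping the source phase, so object semantics survive. A single-channel sketch, where `beta` (the relative size of the swapped band) is an assumed hyperparameter:

```python
import numpy as np

def fourier_style_transfer(src, tgt, beta=0.1):
    """Swap the centred low-frequency amplitude block of src with tgt's,
    keeping src's phase. Style (low-frequency amplitude) moves across
    domains; semantics (phase) stay. Generic sketch of the idea."""
    fs, ft = np.fft.fft2(src), np.fft.fft2(tgt)
    amp_s, pha_s = np.abs(fs), np.angle(fs)
    amp_t = np.abs(ft)
    h, w = src.shape
    b_h, b_w = max(1, int(beta * h)), max(1, int(beta * w))
    # centre the zero frequency, swap the low-frequency block, un-centre
    amp_s, amp_t = np.fft.fftshift(amp_s), np.fft.fftshift(amp_t)
    ch, cw = h // 2, w // 2
    amp_s[ch - b_h:ch + b_h, cw - b_w:cw + b_w] = \
        amp_t[ch - b_h:ch + b_h, cw - b_w:cw + b_w]
    amp_s = np.fft.ifftshift(amp_s)
    return np.real(np.fft.ifft2(amp_s * np.exp(1j * pha_s)))
```

Source images transformed this way look target-styled while keeping their annotations valid, which is what lets the teacher produce better pseudo labels.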
Keyword :
Adversarial Learning; Cross-Domain Object Detection; Style Transfer; Unsupervised Domain Adaptation
Cite:
GB/T 7714: Wu, Lijun, Cao, Zhe, Chen, Zhicong. Teacher-Student Cross-Domain Object Detection Model Combining Style Transfer and Adversarial Learning[J]. Pattern Recognition and Computer Vision, PRCV 2023, Pt X, 2024, 14434: 334-345.
MLA: Wu, Lijun, et al. "Teacher-Student Cross-Domain Object Detection Model Combining Style Transfer and Adversarial Learning." Pattern Recognition and Computer Vision, PRCV 2023, Pt X 14434 (2024): 334-345.
APA: Wu, Lijun, Cao, Zhe, & Chen, Zhicong. (2024). Teacher-Student Cross-Domain Object Detection Model Combining Style Transfer and Adversarial Learning. Pattern Recognition and Computer Vision, PRCV 2023, Pt X, 14434, 334-345.
Abstract :
Video super-resolution recovers high-resolution images from multiple low-resolution images, and recurrent (loop) structures are a common framework choice for video super-resolution tasks. BasicVSR employs bidirectional propagation and feature alignment to efficiently utilize information from the entire input video. In this work, we improve the network's performance by revisiting the role of each module in BasicVSR and redesigning the network. First, after optical-flow warping, a reference-based feature enrichment module maintains centralized communication with the reference frame, which helps handle complex motion; at the same time, each selected keyframe's neighborhood is divided into two regions according to the degree of motion deviation of adjacent frames relative to the keyframe, and models with different receptive fields are adopted for feature extraction to further alleviate the accumulation of alignment errors. In the feature correction module, we replace the simple stack of residual blocks with a residual-in-residual (RIR) structure and fuse features at different levels with each other, making the final feature information more comprehensive and abundant. In addition, dense connections are introduced in the reconstruction module to promote full use of hierarchical feature information for better reconstruction. Experiments on two public datasets, Vid4 and REDS4, show that compared with BasicVSR, the PSNR of the proposed model improves by 0.27 dB and 0.33 dB, respectively. From the point of view of visual perception, the model also effectively improves image clarity and reduces artifacts.
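The RIR replacement for the plain residual stack can be sketched abstractly: inner residual blocks wrapped in an outer long skip, with the group's contribution residual-scaled. The dense toy below stands in for the convolutional blocks; `scale` is an assumed stabilizer, not a value from the paper:

```python
import numpy as np

def rir_group(x, weights, scale=0.2):
    """Residual-in-residual group on (N, C) feature matrices.

    Each weight matrix drives one inner residual block (tanh stands in
    for conv + activation); the outer long skip lets the group learn
    only a scaled correction on top of its input."""
    y = x
    for w in weights:
        y = y + np.tanh(y @ w)       # inner short skip per block
    return x + scale * (y - x)       # outer long skip over the group
```

With all-zero weights every block is the identity, so the group reduces to the identity too, which is the property that makes deep stacks of such groups easy to train.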
Keyword :
Bidirectional propagation; Densely connected residual; Feature enrichment module; Time difference; Video super-resolution
Cite:
GB/T 7714: Wu, Lijun, Ma, Yong, Chen, Zhicong. Dense video super-resolution time-differential network with feature enrichment module[J]. Signal, Image and Video Processing, 2024, 18(11): 7887-7897.
MLA: Wu, Lijun, et al. "Dense video super-resolution time-differential network with feature enrichment module." Signal, Image and Video Processing 18.11 (2024): 7887-7897.
APA: Wu, Lijun, Ma, Yong, & Chen, Zhicong. (2024). Dense video super-resolution time-differential network with feature enrichment module. Signal, Image and Video Processing, 18(11), 7887-7897.
Abstract :
Super-resolution (SR) algorithms have been broadly applied to improve the visual quality of images. However, unstable SR results and the difficulty of collecting high-resolution (HR) and low-resolution (LR) image pairs still greatly hinder their application in position-sensitive downstream tasks in the real world. To address these difficulties, we propose a cross-domain supervised SR method based on unpaired images for position-sensitive downstream tasks (CDS PosSR), which greatly improves the fidelity of geometric positions in the image by exploiting its geometric consistency. Since differing semantic information means that the root-mean-square error cannot constrain unpaired images during the degradation process, a cross-domain supervised hierarchical degradation model for unpaired images is elaborated. Meanwhile, randomly distributed input is adopted to alleviate the problem that the dataset cannot fully cover real-world LR images. According to the experimental results, CDS PosSR not only improves the visual and quantitative performance of the generated images but also outperforms other SR reconstruction algorithms in the fidelity of feature-point location and geometry, which can support position-sensitive downstream tasks.
Keyword :
Degradation learning; geometric consistency; unpaired image super-resolution (SR)
Cite:
GB/T 7714: Wu, Lijun, Chen, Lanxin, Chen, Zhicong, et al. CDS PosSR: Cross-Domain Supervised Unpaired Image Super-Resolution for Position-Sensitive Downstream Tasks[J]. IEEE Transactions on Instrumentation and Measurement, 2024, 73.
MLA: Wu, Lijun, et al. "CDS PosSR: Cross-Domain Supervised Unpaired Image Super-Resolution for Position-Sensitive Downstream Tasks." IEEE Transactions on Instrumentation and Measurement 73 (2024).
APA: Wu, Lijun, Chen, Lanxin, Chen, Zhicong, Cheng, Shuying, & Chen, Zhaohui. (2024). CDS PosSR: Cross-Domain Supervised Unpaired Image Super-Resolution for Position-Sensitive Downstream Tasks. IEEE Transactions on Instrumentation and Measurement, 73.
Abstract :
To address the incomplete segmentation of large objects and the missed segmentation of tiny objects that universally exist in semantic segmentation algorithms, PACAMNet, a real-time segmentation network based on short-term dense concatenation of parallel atrous convolutions and attentional feature fusion, is proposed. First, parallel atrous convolution is introduced to improve the short-term dense concatenate module: by adjusting the atrous factor, multi-scale semantic information is obtained, ensuring that the last layer of the module also receives rich input feature maps. Second, an attention feature fusion module is proposed that aligns the receptive fields of deep and shallow feature maps via depth-separable convolutions of different sizes, with a channel attention mechanism generating weights to effectively fuse the deep and shallow feature maps. Finally, experiments on the Cityscapes and CamVid datasets show segmentation accuracies of 77.4% and 74.0% at inference speeds of 98.7 FPS and 134.6 FPS, respectively. Compared with other methods, PACAMNet improves inference speed while ensuring higher segmentation accuracy, achieving a better balance between the two.
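The channel-attention fusion of deep and shallow features can be sketched as a per-channel gated mix. This is a minimal stand-in for the paper's module, which additionally aligns receptive fields with depth-separable convolutions:

```python
import numpy as np

def channel_attention_fuse(deep, shallow):
    """Fuse deep and shallow feature maps (C, H, W) with channel attention:
    global-average-pool the summed features, squash to per-channel weights
    with a sigmoid, then mix the two maps channel by channel."""
    pooled = (deep + shallow).mean(axis=(1, 2))        # (C,) global descriptor
    w = 1.0 / (1.0 + np.exp(-pooled))                  # sigmoid -> weights
    w = w[:, None, None]                               # broadcast over H, W
    return w * deep + (1.0 - w) * shallow              # attention-weighted mix
```

Because each output channel is a convex combination of the two inputs, the fused map always stays between the deep and shallow responses, letting the network favor whichever scale is more informative per channel.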
Keyword :
Atrous convolution; Attention mechanism; Feature fusion; Real-time semantic segmentation
Cite:
GB/T 7714: Wu, Lijun, Qiu, Shangdong, Chen, Zhicong. Real-time semantic segmentation network based on parallel atrous convolution for short-term dense concatenate and attention feature fusion[J]. Journal of Real-Time Image Processing, 2024, 21(3).
MLA: Wu, Lijun, et al. "Real-time semantic segmentation network based on parallel atrous convolution for short-term dense concatenate and attention feature fusion." Journal of Real-Time Image Processing 21.3 (2024).
APA: Wu, Lijun, Qiu, Shangdong, & Chen, Zhicong. (2024). Real-time semantic segmentation network based on parallel atrous convolution for short-term dense concatenate and attention feature fusion. Journal of Real-Time Image Processing, 21(3).