Yazar "Zhang, Dehuan" seçeneğine göre listele
Listeleniyor 1 - 3 / 3
Item
ReX-Net: A reflectance-guided underwater image enhancement network for extreme scenarios (Pergamon-Elsevier Science Ltd, 2023)
Zhang, Dehuan; Zhou, Jingchun; Zhang, Weishi; Lin, Zifan; Yao, Jian; Polat, Kemal
Due to the complex underwater environment, underwater images exhibit different degradation characteristics, severely affecting their practical applications. Although underwater image enhancement networks with physical priors exist, the statistical priors are not applicable in extreme underwater scenes. Therefore, we propose ReX-Net, a reflectance-guided underwater image enhancement network for extreme scenarios. ReX-Net leverages the complementary information of the reflectance and the underwater image, obtained through the original encoder and the reflectance encoder, to minimize the impact of different scene environments. As underwater images contain object information at different scales, the encoder includes a TriFuse reflected-image object extractor module (TRIOE), which employs tri-scale convolutions to capture features at different scales and utilizes attention mechanisms to enhance channel and spatial information. In the decoder, we design a context-sensitive multi-level integration module (CSMLI) to fuse feature vectors at different resolutions, thereby improving the expressiveness and robustness of features while avoiding artifacts and ensuring pixel accuracy. Experiments on multiple datasets demonstrate that ReX-Net outperforms existing methods. Furthermore, application experiments show the practicality of ReX-Net in other visualization tasks. (See the first sketch below.)

Item
Robust underwater image enhancement with cascaded multi-level sub-networks and triple attention mechanism (Pergamon-Elsevier Science Ltd, 2024)
Zhang, Dehuan; Wu, Chenyu; Zhou, Jingchun; Zhang, Weishi; Lin, Zifan; Polat, Kemal; Alenezi, Fayadh
With the growing exploration of marine resources, underwater image enhancement has gained significant attention. Recent advances in convolutional neural networks (CNNs) have greatly impacted underwater image enhancement techniques. However, conventional CNN-based methods typically employ a single network structure, which may compromise robustness in challenging conditions. Additionally, commonly used UNet networks generally force fusion from low to high resolution at each layer, leading to inaccurate encoding of contextual information. To address these issues, we propose a novel network called Cascaded Network with Multi-level Sub-networks (CNMS), which encompasses the following key components: (a) a cascade mechanism based on local modules and global networks for extracting feature representations with richer semantics and enhanced spatial precision, (b) information exchange between different resolution streams, and (c) a triple attention module for extracting attention-based features. CNMS selectively cascades multiple sub-networks through triple attention modules to extract distinct features from underwater images, bolstering the network's robustness and improving its generalization capabilities. Within each sub-network, we introduce a Multi-level Sub-network (MSN) that spans multiple resolution streams, combining contextual information from various scales while preserving the original underwater images' high-resolution spatial details. Comprehensive experiments on multiple underwater datasets demonstrate that CNMS outperforms state-of-the-art methods in image enhancement tasks. (See the second sketch below.)

Item
TANet: Transmission and atmospheric light driven enhancement of underwater images (Pergamon-Elsevier Science Ltd, 2024)
Zhang, Dehuan; Guo, Yakun; Zhou, Jingchun; Zhang, Weishi; Lin, Zifan; Polat, Kemal; Alenezi, Fayadh
Motivated by the adverse impact of light attenuation and scattering, which leads to color distortion and low contrast in underwater images, our study focuses on enhancing these images using localized transmission feature analysis and global atmospheric light feature extraction. To this end, we propose a novel approach, named TANet, drawing upon the dynamics of transmission and atmospheric light. TANet integrates two primary components: a spatial-domain Transmission-Driven Refinement module (TDR) and a frequency-domain Atmospheric Light Removal Fourier module (ALRF). The TDR module employs a Gated Multipurpose Unit with dual branches, selectively regulating input features. This allows for a refined merging of feature vectors that subsequently interact, enabling cross-channel feature integration. Capitalizing on the correlation between transmission and image quality, TDR facilitates the detailed enhancement of underwater images by depicting the perceived transmission across distinct image sections. Given that atmospheric light attenuates at different rates under water due to varying wavelengths, and considering that atmospheric light is globally constant and thereby influences underwater image capture, we developed the ALRF module. It processes global information in the frequency domain, efficiently negating the impact of atmospheric light on underwater images and enhancing their quality and visibility. Extensive experimental results affirm TANet's superior performance and demonstrate its effectiveness in underwater image enhancement. (See the third sketch below.)
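The first item above (ReX-Net) describes its TRIOE module only at a high level, so the following PyTorch sketch is a hedged illustration of the general tri-scale-convolution-plus-attention idea rather than the authors' implementation; the class name `TriScaleAttentionBlock`, kernel sizes, and attention design are assumptions introduced here for illustration.

```python
# Illustrative sketch only: a tri-scale convolution block with channel and
# spatial attention, in the spirit of the TRIOE module described above.
# Layer sizes, kernel choices, and the attention design are assumptions.
import torch
import torch.nn as nn


class TriScaleAttentionBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # Three parallel convolutions capture features at different scales.
        self.conv3 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.conv5 = nn.Conv2d(channels, channels, kernel_size=5, padding=2)
        self.conv7 = nn.Conv2d(channels, channels, kernel_size=7, padding=3)
        self.fuse = nn.Conv2d(3 * channels, channels, kernel_size=1)
        # Channel attention: squeeze-and-excitation style gating.
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // 4, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // 4, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Spatial attention: a 7x7 convolution over pooled channel statistics.
        self.spatial_gate = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        multi = torch.cat([self.conv3(x), self.conv5(x), self.conv7(x)], dim=1)
        fused = self.fuse(multi)
        fused = fused * self.channel_gate(fused)
        stats = torch.cat(
            [fused.mean(dim=1, keepdim=True), fused.amax(dim=1, keepdim=True)], dim=1
        )
        return x + fused * self.spatial_gate(stats)


if __name__ == "__main__":
    block = TriScaleAttentionBlock(32)
    print(block(torch.randn(1, 32, 64, 64)).shape)  # torch.Size([1, 32, 64, 64])
```

The residual connection at the end keeps the block shape-preserving, so it could be dropped into an encoder stage without changing surrounding feature dimensions.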
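The second item (CNMS) highlights multi-resolution streams that exchange information while keeping high-resolution detail. The sketch below illustrates that exchange pattern with two streams; the class name `TwoStreamExchangeBlock`, the number of streams, and the down/up-sampling scheme are assumptions, not the paper's MSN design.

```python
# Illustrative sketch only: two parallel resolution streams that exchange
# features, echoing the multi-resolution idea described for the MSN sub-network.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TwoStreamExchangeBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # High-resolution stream keeps full spatial detail;
        # low-resolution stream encodes wider context at half resolution.
        self.high = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True)
        )
        self.low = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True)
        )
        self.high_to_low = nn.Conv2d(channels, channels, 3, stride=2, padding=1)
        self.low_to_high = nn.Conv2d(channels, channels, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        high = self.high(x)
        low = self.low(F.avg_pool2d(x, 2))
        # Exchange: downsample the high stream into the low stream, then
        # upsample the low stream back into the high stream.
        low = low + self.high_to_low(high)
        up = F.interpolate(
            self.low_to_high(low), size=high.shape[-2:],
            mode="bilinear", align_corners=False
        )
        return high + up


if __name__ == "__main__":
    block = TwoStreamExchangeBlock(16)
    print(block(torch.randn(1, 16, 64, 64)).shape)  # torch.Size([1, 16, 64, 64])
```

The output stays at the input resolution, which matches the abstract's emphasis on preserving high-resolution spatial detail while mixing in lower-resolution context.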
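The third item (TANet) describes its ALRF module as processing global information in the frequency domain. The sketch below shows one common way to realize such a module with a 2-D FFT and a learnable global filter; the class name `GlobalFourierFilter`, the fixed spatial size, and the filter parameterization are assumptions for illustration, not the paper's ALRF.

```python
# Illustrative sketch only: features are transformed with a 2-D FFT,
# reweighted by a learnable global filter, and transformed back, echoing
# the frequency-domain processing described for the ALRF module.
import torch
import torch.nn as nn


class GlobalFourierFilter(nn.Module):
    def __init__(self, channels: int, height: int, width: int):
        super().__init__()
        # One learnable complex weight per channel and frequency bin
        # (rfft2 keeps width // 2 + 1 frequency columns).
        self.weight = nn.Parameter(
            torch.randn(channels, height, width // 2 + 1, 2) * 0.02
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        _, _, h, w = x.shape
        freq = torch.fft.rfft2(x, norm="ortho")           # complex spectrum
        freq = freq * torch.view_as_complex(self.weight)  # global, content-independent gain
        out = torch.fft.irfft2(freq, s=(h, w), norm="ortho")
        return x + out                                    # residual connection


if __name__ == "__main__":
    module = GlobalFourierFilter(channels=8, height=64, width=64)
    print(module(torch.randn(1, 8, 64, 64)).shape)  # torch.Size([1, 8, 64, 64])
```

Because each frequency bin gets its own multiplicative weight, the filter acts globally on the image, which is consistent with the abstract's point that atmospheric light is a globally constant influence.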