LKD-Net: Large Kernel Convolution Network for Single Image Dehazing (2209.01788v1)

Published 5 Sep 2022 in cs.CV and cs.LG

Abstract: Deep convolutional neural network (CNN)-based single image dehazing methods have achieved significant success. Previous methods improved performance by increasing network depth and width, while current methods focus on enlarging the convolutional kernel to benefit from a larger receptive field. However, directly increasing the kernel size introduces a massive amount of parameters and computational overhead. Thus, this paper devises a novel Large Kernel Convolution Dehaze Block (LKD Block) consisting of the Decomposition deep-wise Large Kernel Convolution Block (DLKCB) and the Channel Enhanced Feed-forward Network (CEFN). The designed DLKCB splits a depth-wise large kernel convolution into a smaller depth-wise convolution and a depth-wise dilated convolution without introducing massive parameters or computational overhead. Meanwhile, the designed CEFN incorporates a channel attention mechanism into the feed-forward network to exploit significant channels and enhance robustness. By combining multiple LKD Blocks with up- and down-sampling modules, the Large Kernel Convolution Dehaze Network (LKD-Net) is constructed. The evaluation results demonstrate the effectiveness of the designed DLKCB and CEFN, and LKD-Net outperforms the state-of-the-art. On the SOTS indoor dataset, LKD-Net dramatically outperforms the Transformer-based method Dehamer with only 1.79% of the parameters and 48.9% of the FLOPs. The source code of LKD-Net is available at https://github.com/SWU-CS-MediaLab/LKD-Net.
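
The abstract's core idea, replacing one large depth-wise kernel with a small depth-wise convolution followed by a depth-wise dilated convolution, can be illustrated with a minimal PyTorch sketch. The layer layout, kernel sizes, and class name below are assumptions for illustration, not the authors' implementation; see the linked GitHub repository for the official code.

```python
# Minimal sketch of a decomposed large-kernel depth-wise convolution.
# Assumed kernel sizes (5x5 depth-wise + 7x7 depth-wise dilated, dilation 3)
# approximate a single large depth-wise kernel at far lower parameter cost.
import torch
import torch.nn as nn

class DecomposedLargeKernelConv(nn.Module):
    def __init__(self, channels: int, small_kernel: int = 5,
                 dilated_kernel: int = 7, dilation: int = 3):
        super().__init__()
        # Small depth-wise convolution captures local structure.
        self.dw_small = nn.Conv2d(
            channels, channels, kernel_size=small_kernel,
            padding=small_kernel // 2, groups=channels)
        # Depth-wise dilated convolution widens the effective receptive field
        # without the parameter count of one large dense kernel.
        self.dw_dilated = nn.Conv2d(
            channels, channels, kernel_size=dilated_kernel, dilation=dilation,
            padding=dilation * (dilated_kernel - 1) // 2, groups=channels)
        # 1x1 convolution mixes information across channels.
        self.pointwise = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.pointwise(self.dw_dilated(self.dw_small(x)))

if __name__ == "__main__":
    # A 64-channel feature map keeps its spatial size through the block.
    block = DecomposedLargeKernelConv(channels=64)
    out = block(torch.randn(1, 64, 128, 128))
    print(out.shape)  # torch.Size([1, 64, 128, 128])
```

With these assumed sizes, the two stacked depth-wise layers give an effective receptive field comparable to a roughly 23x23 kernel while using only a few thousand parameters per 64-channel block, which is the kind of trade-off the DLKCB design targets.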

Citations (21)
