BANet: Blur-aware Attention Networks for Dynamic Scene Deblurring

19 Jan 2021  ·  Fu-Jen Tsai, Yan-Tsung Peng, Yen-Yu Lin, Chung-Chi Tsai, Chia-Wen Lin ·

Image motion blur results from a combination of object motion and camera shake, and the blurring effect is generally directional and non-uniform. Previous work attempted to handle non-uniform blur using self-recurrent multi-scale, multi-patch, or multi-temporal architectures with self-attention, achieving decent results. However, self-recurrent frameworks typically lead to longer inference times, while inter-pixel or inter-channel self-attention can cause excessive memory usage. This paper proposes a Blur-aware Attention Network (BANet) that accomplishes accurate and efficient deblurring in a single forward pass. BANet uses region-based self-attention with multi-kernel strip pooling to disentangle blur patterns of different magnitudes and orientations, and cascaded parallel dilated convolutions to aggregate multi-scale content features. Extensive experiments on the GoPro and RealBlur benchmarks show that BANet performs favorably against the state of the art in blurred-image restoration and can deliver deblurred results in real time.
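The abstract mentions multi-kernel strip pooling, i.e. pooling a feature map with elongated one-dimensional kernels of several lengths so that horizontally and vertically oriented blur patterns are captured separately. The following NumPy sketch illustrates the idea on a single-channel feature map; the kernel sizes and the simple averaging of the pooled maps are illustrative assumptions, not the paper's exact design.

```python
import numpy as np

def strip_pool(feat, k, axis):
    """Average-pool a (H, W) map with a 1-D strip kernel of length k
    along the given axis, keeping the spatial size via edge padding."""
    pad = [(0, 0), (0, 0)]
    pad[axis] = (k // 2, (k - 1) // 2)
    padded = np.pad(feat, pad, mode="edge")
    # sliding-window mean along the chosen axis (window axis goes last)
    win = np.lib.stride_tricks.sliding_window_view(padded, k, axis=axis)
    return win.mean(axis=-1)

def multi_kernel_strip_pool(feat, kernels=(3, 5, 7)):
    """Pool with horizontal and vertical strips of several lengths and
    merge the results; plain averaging here is an assumed aggregation."""
    maps = []
    for k in kernels:
        maps.append(strip_pool(feat, k, axis=1))  # horizontal strips
        maps.append(strip_pool(feat, k, axis=0))  # vertical strips
    return np.mean(maps, axis=0)

feat = np.arange(36, dtype=float).reshape(6, 6)
pooled = multi_kernel_strip_pool(feat)
print(pooled.shape)  # → (6, 6)
```

Because each strip kernel averages only along one spatial direction, horizontal strips respond to horizontally smeared blur while vertical strips respond to vertically smeared blur, which is the intuition behind disentangling blur orientations.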


Results from the Paper


Task              | Dataset                  | Model | Metric Name | Metric Value | Global Rank
------------------|--------------------------|-------|-------------|--------------|------------
Deblurring        | GoPro                    | BANet | PSNR        | 32.54        | #26
Deblurring        | GoPro                    | BANet | SSIM        | 0.957        | #23
Image Deblurring  | GoPro                    | BANet | SSIM        | 0.957        | #22
Deblurring        | HIDE (trained on GoPro)  | BANet | PSNR (sRGB) | 30.16        | #12
Deblurring        | HIDE (trained on GoPro)  | BANet | SSIM (sRGB) | 0.93         | #12
Deblurring        | RealBlur-J               | BANet | SSIM (sRGB) | 0.923        | #6
Deblurring        | RealBlur-J               | BANet | PSNR (sRGB) | 32.00        | #7
Deblurring        | RealBlur-R               | BANet | PSNR (sRGB) | 39.55        | #5
Deblurring        | RealBlur-R               | BANet | SSIM (sRGB) | 0.971        | #6
