Attention Driven Self-Similarity Capture for Motion Deblurring

Jie Zhang, Chuanfa Zhang, Jiangzhou Wang, Qingyue Xiong, Wenqiang Zhang

IEEE International Conference on Multimedia and Expo (ICME 2021) Oral (CCF B)

Recently, deep learning-based algorithms have achieved impressive results on deblurring tasks. However, although self-similarity has proved to be an important image prior in image restoration, it has not been exploited for motion deblurring. To tackle this problem, we propose an Attention Self-Similarity Capture (ASSC) module, which takes full advantage of self-similarity by capturing long-range feature dependencies. In addition, to achieve a trade-off between performance and efficiency, we design an Enhanced Spatial Attention (ESA) module that can dynamically adapt to spatially-varying motion blur. We employ a patch-hierarchical architecture composed of these two modules, with parameter-free feature flow between different levels. Moreover, we build two large-scale datasets, GOPRO-Supplement and SONY-Extension, to expand the scene diversity and resolution of the public GOPRO dataset. Extensive experiments demonstrate that our method outperforms state-of-the-art methods on both the public GOPRO dataset and our new datasets.
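For readers unfamiliar with the two kinds of attention mentioned above, the sketch below illustrates the general ideas in PyTorch: a non-local style block that captures long-range feature dependencies by comparing every spatial position with every other one (the mechanism behind self-similarity capture), and a simple per-pixel gate that lets features adapt to spatially-varying blur. This is only an illustrative approximation; the class names, channel sizes, and 1x1-conv layout are assumptions, not the paper's exact ASSC/ESA designs.

```python
import torch
import torch.nn as nn


class SelfSimilarityBlock(nn.Module):
    """Non-local style attention: each pixel aggregates features from all
    other pixels, weighted by softmax-normalized feature similarity."""
    def __init__(self, channels, reduction=2):
        super().__init__()
        inner = channels // reduction
        self.query = nn.Conv2d(channels, inner, kernel_size=1)
        self.key = nn.Conv2d(channels, inner, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        self.out = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)   # (b, hw, c')
        k = self.key(x).flatten(2)                     # (b, c', hw)
        v = self.value(x).flatten(2).transpose(1, 2)   # (b, hw, c)
        # Pairwise similarity between all spatial positions -> (b, hw, hw)
        attn = torch.softmax(q @ k / (q.shape[-1] ** 0.5), dim=-1)
        y = (attn @ v).transpose(1, 2).reshape(b, c, h, w)
        return x + self.out(y)                         # residual connection


class SpatialAttentionGate(nn.Module):
    """Simple spatial gate: predicts a per-pixel weight map so features
    can be re-weighted for spatially-varying blur."""
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv2d(channels, 1, kernel_size=3, padding=1)

    def forward(self, x):
        return x * torch.sigmoid(self.conv(x))


if __name__ == "__main__":
    feats = torch.randn(1, 64, 32, 32)
    feats = SelfSimilarityBlock(64)(feats)
    feats = SpatialAttentionGate(64)(feats)
    print(feats.shape)  # torch.Size([1, 64, 32, 32])
```

Note that the full pairwise attention map scales quadratically with the number of spatial positions, which is why such blocks are typically applied on downsampled feature maps or within patches, consistent with a patch-hierarchical design.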