Abstract:
Given a degraded low-resolution input image, super-resolution (SR) aims to restore the lost textures and structures and generate high-resolution image content. Significant advances in image super-resolution have been made recently, dominated by convolutional neural networks (CNNs). The top-performing CNN-based SR networks typically employ very deep models to produce spatially precise results, but at the cost of losing long-range contextual information. Additionally, state-of-the-art (SOTA) methods generally fail to maintain the balance between spatial details and contextual information, which is a basic requirement for superior performance in the SR task. For restoration applications such as SR, the network must efficiently preserve low-frequency information while reconstructing high-frequency details. Thus, our work presents a novel architecture with the holistic objective of maintaining spatially precise representations by collecting contextual content and restoring multi-frequency information throughout the network. Our proposed model learns an enriched set of features that, besides combining contextual information from multiple scales, simultaneously preserves the high-resolution spatial details. The core of our approach is a novel non-local and local attention (NLLA) block that focuses on (1) learning enriched features by collecting information from multiple scales, (2) simultaneously handling information at different frequencies, and (3) effectively fusing the relevant low-frequency and high-frequency information while discarding redundant features. Additionally, to effectively map low-resolution features to high resolution, we propose a novel aggregated attentive up-sampler (AAU) block that attentively learns the weights to up-sample the refined low-resolution feature maps into the high-resolution output.
Extensive experiments on benchmark SR datasets demonstrate that the proposed method achieves appealing performance, both qualitatively and quantitatively.