Please use this identifier to cite or link to this item: http://dspace.iitrpr.ac.in:8080/xmlui/handle/123456789/3954
Title: (MLE2A2U)-Net: Image super-resolution via multi-level edge embedding and aggregated attentive upsampler network
Authors: Mehta, N.
Murala, S.
Keywords: Aggregated up-sampling
Edge embedding
Feature refinement
Image super-resolution
Multi-scale feature learning
Issue Date: 5-Sep-2022
Abstract: Given a degraded low-resolution input image, super-resolution (SR) aims to restore the lost textures and structures and generate high-resolution image content. Significant advances in image super-resolution have been made lately, dominated by convolutional neural networks (CNNs). The top-performing CNN-based SR networks typically employ very deep models to obtain spatially precise results, but at the cost of losing long-term contextual information. Additionally, state-of-the-art (SOTA) methods generally fail to maintain the balance between spatial details and contextual information, which is the basic requirement for superior performance in the SR task. For a restoration application like SR, the network generally demands efficient preservation of low-frequency information and reconstruction of high-frequency details. Thus, our work presents a novel architecture with the holistic objective of maintaining spatially precise representations by collecting contextual content and restoring multi-frequency information throughout the network. Our proposed model learns an enriched set of features that, while combining contextual information from multiple scales, simultaneously preserves the high-resolution spatial details. The core of our approach is a novel non-local and local attention (NLLA) block which focuses on (1) learning enriched features by collecting information from multiple scales, (2) simultaneously handling the different frequency information, and (3) effectively fusing the relevant low-frequency and high-frequency information while discarding redundant features. Additionally, to effectively map low-resolution features to high resolution, we propose a novel aggregated attentive up-sampler (AAU) block that attentively learns the weights to up-sample the refined low-resolution feature maps into the high-resolution output.
Extensive experiments on the benchmark SR datasets demonstrate that the proposed method achieves appealing performance, both qualitatively and quantitatively.
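The abstract does not give the AAU's exact formulation, but the idea of attentively learned up-sampling weights can be illustrated with a minimal numpy sketch. The sketch below assumes (purely for illustration; the function names, the sigmoid gate, and the sub-pixel rearrangement are not taken from the paper) that low-resolution features are re-weighted by a learned per-position attention map and then rearranged into a higher-resolution grid, in the style of sub-pixel (pixel-shuffle) upsampling:

```python
import numpy as np

def pixel_shuffle(x, r):
    """Rearrange (C*r*r, H, W) features into (C, H*r, W*r): sub-pixel upsampling."""
    c_rr, h, w = x.shape
    c = c_rr // (r * r)
    x = x.reshape(c, r, r, h, w)      # split channels into (c, r, r)
    x = x.transpose(0, 3, 1, 4, 2)    # interleave: (c, h, r, w, r)
    return x.reshape(c, h * r, w * r)

def attentive_upsample(feat, weight_logits, r=2):
    """Hypothetical attention-weighted upsampling: a sigmoid gate
    re-weights low-resolution features before sub-pixel rearrangement."""
    gate = 1.0 / (1.0 + np.exp(-weight_logits))  # per-position weights in (0, 1)
    return pixel_shuffle(feat * gate, r)

# Toy example: 4 channels at 4x4 become 1 channel at 8x8 with scale r=2.
feat = np.random.randn(4, 4, 4)     # C*r*r = 1*2*2 = 4 channels
logits = np.random.randn(4, 4, 4)   # stands in for learned attention logits
hr = attentive_upsample(feat, logits, r=2)
print(hr.shape)  # (1, 8, 8)
```

In the actual network these attention weights would be produced by convolutional layers and trained end-to-end; the sketch only shows the data-flow shape of gating followed by sub-pixel rearrangement.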
URI: http://localhost:8080/xmlui/handle/123456789/3954
Appears in Collections:Year-2022

Files in This Item:
File: Full Text.pdf (5.85 MB, Adobe PDF)


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.