Please use this identifier to cite or link to this item:
http://dspace.iitrpr.ac.in:8080/xmlui/handle/123456789/4038
Title: FASNet: Feature Aggregation and Sharing Network for Image Inpainting
Authors: Phutke, S.S.; Murala, S.
Keywords: Adversarial learning; edge refinement; feature aggregation; feature sharing; image inpainting
Issue Date: 22-Sep-2022
Abstract: Image inpainting is a reconstruction method in which a corrupted image containing holes is filled with the most relevant content from the valid regions of the image. To inpaint an image, we propose a lightweight cascaded architecture with 2.5M parameters, consisting of an encoder feature aggregation block (FAB) with a decoder feature sharing (DFS) inpainting network, followed by a refinement network. First, the FAB-with-DFS (inpainting) generator network is proposed, which comprises a multi-level feature aggregation mechanism and a feature-sharing decoder. The FAB uses multi-scale spatial channel-wise attention to fuse weighted features from all encoder levels. The DFS reconstructs the inpainted image with multi-scale, multi-receptive-field feature sharing in order to inpaint images with small to large hole regions effectively. Further, a refinement generator network is proposed to refine the inpainted image produced by the inpainting generator network. The effectiveness of the proposed architecture is verified on the CelebA-HQ, Paris Street View (PARIS_SV), and Places2 datasets, corrupted using the publicly available NVIDIA mask dataset. Extensive result analysis with a detailed ablation study proves the robustness of the proposed architecture over state-of-the-art methods for image inpainting.
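No implementation is published on this record; as a rough illustration only, the abstract's attention-weighted fusion of encoder-level features could be sketched as follows. All names (`channel_attention_weights`, `aggregate_features`) are hypothetical, and features are assumed to have already been resized to a common shape, which the paper's multi-scale FAB handles more elaborately.

```python
import numpy as np

def channel_attention_weights(feat):
    # Global average pooling over spatial dims gives a per-channel descriptor.
    desc = feat.mean(axis=(1, 2))                 # shape (C,)
    # Softmax over channels serves as a simple channel-wise attention weighting.
    e = np.exp(desc - desc.max())
    return e / e.sum()

def aggregate_features(features):
    # features: list of (C, H, W) arrays from different encoder levels,
    # assumed already brought to a common (C, H, W) shape.
    fused = np.zeros_like(features[0])
    for feat in features:
        w = channel_attention_weights(feat)       # (C,)
        fused += w[:, None, None] * feat          # weight each channel, then sum levels
    return fused

levels = [np.random.rand(8, 16, 16) for _ in range(3)]
out = aggregate_features(levels)
print(out.shape)  # (8, 16, 16)
```

This only conveys the idea of fusing weighted features from all encoder levels; the actual FAB described in the abstract applies multi-scale *spatial* channel-wise attention, not a plain global-pool softmax.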
URI: http://localhost:8080/xmlui/handle/123456789/4038
Appears in Collections: Year-2022
Files in This Item:
File | Description | Size | Format
---|---|---|---
Full Text.pdf | | 1.8 MB | Adobe PDF
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.