Please use this identifier to cite or link to this item:
http://dspace.iitrpr.ac.in:8080/xmlui/handle/123456789/4305
Title: | Pseudo decoder guided light-weight architecture for image inpainting |
Authors: | Phutke, S.S. Murala, S. |
Keywords: | Encoder multi-level feature fusion; Pseudo decoder; Varying receptive fields; Space-depth correlation; Multi-scale loss; Image inpainting |
Issue Date: | 15-Dec-2022 |
Abstract: | Image inpainting is a widely used approach in which the missing regions of an input image are synthesized. It has various applications, such as undesired object removal and virtual garment shopping. Image inpainting methods may use knowledge of the hole locations to effectively regenerate content in an image. Existing methods give impressive results with coarse-to-fine architectures or with guiding information such as edges and structures. However, coarse-to-fine architectures require substantial resources, leading to high computational cost, while methods relying on edge or structural information depend on available models to generate that guiding information. In this context, we propose a computationally efficient, lightweight network for image inpainting with very few parameters (0.97 M) and without any guiding information. The proposed architecture consists of an encoder multi-level feature fusion module, a pseudo decoder, and a regeneration decoder. The encoder multi-level feature fusion module extracts relevant information from each encoder level to merge structural and textural information from various receptive fields. This information is then processed by the pseudo decoder, followed by a space-depth correlation module, to assist the regeneration decoder in the inpainting task. Experiments are performed with different types of masks and compared with state-of-the-art methods on three benchmark datasets, i.e., Paris Street View (PARIS_SV), Places2, and CelebA_HQ. In addition, the proposed network is tested on high-resolution images (1024 × 1024 and 2048 × 2048) and compared with existing methods. The extensive comparison with state-of-the-art methods, computational complexity analysis, and ablation study demonstrate the effectiveness of the proposed framework for image inpainting. |
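The abstract mentions a space-depth correlation module. The paper's exact module is not specified in this record; as an illustrative sketch only, such modules typically build on the standard space-to-depth (pixel-unshuffle) rearrangement, which trades spatial resolution for channel depth so that spatially neighboring features can be correlated along the channel axis. The function below is a hypothetical, minimal NumPy version of that rearrangement, not the authors' implementation:

```python
import numpy as np

def space_to_depth(x, block=2):
    """Rearrange spatial blocks into channels: (H, W, C) -> (H/b, W/b, C*b*b).

    Standard space-to-depth (pixel-unshuffle). The paper's space-depth
    correlation module is assumed to operate on a rearrangement like this;
    the details here are illustrative only.
    """
    h, w, c = x.shape
    assert h % block == 0 and w % block == 0, "H and W must be divisible by block"
    # Split each spatial axis into (num_blocks, block), then fold the two
    # small block dimensions into the channel axis.
    x = x.reshape(h // block, block, w // block, block, c)
    x = x.transpose(0, 2, 1, 3, 4)
    return x.reshape(h // block, w // block, block * block * c)

# Toy feature map: 4x4 spatial grid with 3 channels.
feat = np.arange(4 * 4 * 3, dtype=np.float32).reshape(4, 4, 3)
out = space_to_depth(feat, block=2)
print(out.shape)  # -> (2, 2, 12): each output position holds a 2x2 spatial block
```

Each output position now stacks a full 2×2 spatial neighborhood in its channel vector, so a plain channel-wise operation afterward can mix information across that neighborhood without extra spatial convolutions.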
URI: | http://localhost:8080/xmlui/handle/123456789/4305 |
Appears in Collections: | Year-2022 |
Files in This Item:
File | Description | Size | Format
---|---|---|---
Full Text.pdf | | 3.61 MB | Adobe PDF
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.