Please use this identifier to cite or link to this item:
http://dspace.iitrpr.ac.in:8080/xmlui/handle/123456789/4051
Title: IPDCN2: Improvised Patch-based Deep CNN for facial retouching detection
Authors: Sharma, K.; Singh, G.; Goyal, P.
Keywords: Multimedia forensics; Facial retouching; Convolution neural network; Residual skip connection
Issue Date: 26-Sep-2022
Abstract: With the advancement of photo-editing software, facial retouching has become a common practice across social media platforms, Curriculum Vitae (CV) websites, photo-sharing applications, and magazines that publish flawless facial images of celebrities. In this paper, we propose an improvised patch-based deep convolution neural network (IPDCN2) to classify whether a facial image is original or retouched. The proposed network comprises three stages: pre-processing, high-level feature extraction, and classification. First, a pre-processing stage extracts only the relevant patches from the input image using 68 facial landmarks. In the second stage, an efficient and robust CNN based on residual learning extracts high-level hierarchical features from these patches; the network uses residual learning together with max pooling layers to maximize information flow across the network. Finally, the extracted high-level features are passed to fully-connected layers for classification. Experimental results show that the proposed network outperforms existing state-of-the-art techniques, achieving an accuracy of 99.84% on the ND-IIITD dataset. Moreover, it achieves classification accuracies of 95.80%, 83.70%, and 97.30% on the YMU, VMU, and MIW make-up datasets, respectively.
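The sketch below is a minimal illustration (not the authors' code) of the three-stage pipeline the abstract describes, written in PyTorch. The patch size, channel widths, the exact max-pool skip wiring, the classifier head, and the landmark source (e.g., a pre-computed 68-point detector such as dlib's) are all assumptions made for illustration; only the overall structure (landmark-centred patches, a residual CNN with max-pool skips, fully-connected classification into original vs. retouched) comes from the abstract.

```python
# Illustrative sketch of the IPDCN2 pipeline described in the abstract.
# NOT the authors' implementation: all hyperparameters are assumptions.
import torch
import torch.nn as nn


def extract_patches(image, landmarks, patch_size=32):
    """Crop a square patch centred on each of the 68 facial landmarks.

    image:     (C, H, W) float tensor
    landmarks: (68, 2) tensor of (x, y) pixel coordinates, assumed to come
               from a 68-point facial landmark detector.
    Returns a (68, C, patch_size, patch_size) tensor.
    """
    half = patch_size // 2
    # Pad so patches near the image border stay in range.
    padded = nn.functional.pad(image, (half, half, half, half))
    patches = []
    for x, y in landmarks:
        x, y = int(x) + half, int(y) + half
        patches.append(padded[:, y - half:y + half, x - half:x + half])
    return torch.stack(patches)


class ResidualBlock(nn.Module):
    """Downsampling block whose skip path uses max pooling, following the
    abstract's 'residual learning with the help of max pooling layers';
    the exact wiring is an assumption."""

    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, stride=2, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch),
        )
        # Skip path: max-pool to match spatial size, 1x1 conv for channels.
        self.skip = nn.Sequential(nn.MaxPool2d(2), nn.Conv2d(in_ch, out_ch, 1))
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(self.body(x) + self.skip(x))


class IPDCN2Sketch(nn.Module):
    """Patch encoder + fully-connected classifier (original vs. retouched)."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            ResidualBlock(3, 32),    # 32x32 -> 16x16
            ResidualBlock(32, 64),   # 16x16 -> 8x8
            ResidualBlock(64, 128),  # 8x8   -> 4x4
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )
        self.classifier = nn.Sequential(
            nn.Linear(128, 64), nn.ReLU(inplace=True), nn.Linear(64, 2)
        )

    def forward(self, patches):  # (N, 68, 3, 32, 32)
        n, p, c, h, w = patches.shape
        feats = self.features(patches.view(n * p, c, h, w))
        # Pool the per-patch features into one image-level descriptor.
        return self.classifier(feats.view(n, p, -1).mean(dim=1))


if __name__ == "__main__":
    img = torch.rand(3, 128, 128)
    lms = torch.randint(10, 118, (68, 2)).float()  # stand-in landmarks
    batch = extract_patches(img, lms).unsqueeze(0)  # (1, 68, 3, 32, 32)
    print(IPDCN2Sketch()(batch).shape)              # torch.Size([1, 2])
```

The max-pool skip path is one plausible reading of the abstract: the pooled identity carries the strongest local activations past the convolutional body, so retouching artifacts detected in early layers are not attenuated as the patch is downsampled.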
URI: http://dspace.iitrpr.ac.in:8080/xmlui/handle/123456789/4051
Appears in Collections: Year-2022
Files in This Item:
File | Description | Size | Format
---|---|---|---
Full Text.pdf | | 4.08 MB | Adobe PDF